[Binary archive content omitted — the original bytes are a tar stream whose payload is gzip-compressed data and cannot be rendered as text. Only the archive structure is recoverable:]

tar archive members (owner: core):
  var/home/core/zuul-output/
  var/home/core/zuul-output/logs/
  var/home/core/zuul-output/logs/kubelet.log.gz   (gzip-compressed kubelet log)
LÄ' zh\!:+f`Bf`g PDPc}^qr"gZdz3fԖu1wGO) SN_-846,/8sdz:ԟV$Żt>?߮ {Vianڝ\)5G|gw-wK:wf(*e8(I!<rT<]fȖ%YӋ^PF2.Z],vhy&''uUX VF4Jeם+'q[2pۼQ[l:i29?V$ էߝ&ZWoO>d$HF2Rlm4ɼcqGy]?,6ȕ" Px>Y{(񬊨c2bLf9!H^c761 AFM( ٨ -wc1W&ZE,ZSyEZxL6Oe -@ eJCq;hs#f\(zW(ds6C8.F6 g+ ،G}u zZd;_sT1 Êǂ}eGš=X8QH04{(Iǫf=:VQ($UDYYt<N+7IQ /$A#VINuUϽIGji`;_9yC{VX> #z?Ԉf>OCGFvdB%更,1ȴi ,DnSR ,Uz5=U6E bERj*AU1 Fߔ]mmm@@|+t&oаb9 3qȵm(|/NJa{ڳ}Tה \@6^RkFѻ&~rdA<'?b->/)ZɜXs28=AE?T Ղua]-xج~@8?:%y9r0i$0$1 CBQW1 އ %X^Sf |9X>??]..J_t^H,.mibx駣wX%3=N.G)&m3FtaiR}w[sdJ64f0ӫJ7f\ӴuOxB*"Z(^h.0$~ȁQ 2+*)f؆.,/OD1\y,eC(L)T2՟#訄5}l)0D9z^dz/H#IzG 0RŗnFMp [. ( 8Z--V)8Dra7 -,y BZ0&DDLP%DW2֑6RLk R>x3Qݮgr6LfJ&s*! 0/9Őy{` SCi̍066u| " -D` ǑI^Lx0v=/^H&JcQm{}tC~pwK$#`\$#:%Q #X.Y*Upz)b4 s?dJ#KSmDdPgEV)s!8xh;LL}('D08f77w76P3w7MxeU$*V_hX[+|U-4ҊP")P}1C&YQt$OCRyun3ևfԠ*k;I$[5ʹ;Pa 2;]!B3RBFഩɉ/^ TdQ1#B]+9,K O1Y#n)ūX9_.w+0;ճ(qmA$nPGSTMP.Rg0/0KF0˨Tխc7'Ck@tV+1gIUΫ9sZP|pjR8//X'9f 7C0 `Ԭ|Տy:dI2e0"4#T84CȦ? fnӭ#zn:HwMbINĜϸ˨G3iEeYˈeav$SN;r:׵&՜~ dBl#RYmS%Jijx dEM9=1q]PUԜN U S'VUɥP ys iTZMQRp"RjfޑkO#NF F̠oI%XV>s:hEf9&+i2w)H&&eYN8 |`r'JR"V::P ?P >h>1;v27+,| mZAkXX3Y8Ppe1D c[%19Ǟn"Gbr$k9=4p j"O .LocSua= fs|g53ދfv[Q'}eo |` f9STpo_G-oM Ļ%H/dHq׸5vq~p0-??@z7|pOV@iQh!t([nR/1.lmXn?:8Ê 7eD~YR%Xc)V%%#B|3I'%Rfh# RԻ)R<ԅz3.P̀i7;0kc鱔¼R39@zq=u0{bOf|S!:CZqd@WfYmt/%yLW[?OgK_Jl6;&[Lz1{г{ܗZt}6ޮ ,[E"gpJN(ܧLjS)MmƼB Źr:(g,AVg!$=}֡hz W؝"}Ajή'iS jWZށ{HmE!yKK"]H 1&I৳iuI4;/Bl8*nt9 &R;oA Ɗh3Pʱ\>5Ax"\t?!$#a K'}.yN@Lr}gpR Z]?o4-p-z S/w~|>ŨO>wm[iʷlegYػͣd%v k +vTLҧ,/IZx?z>,*m)DiCx׮fC 9\, slBz,Y)>;YU4$?*&gVs ?Z z !fԓ #yZ !piJL1eV . ` 0\R3&'JP`Ci%fs*l*8U7ep/ħ.)fβTQ IK!@AexɄl,Yٰ)B𜯬AQl% 9K0tL)xxf%8^P"<.ݯ*݇s#|j¹iB!=^Xn1/,뷾}9*7ZR-n8s4 ^(W~ۛ^6+t>-k{?˯{O k*Wr-5c_F0\I}tE*W>A6|\@Ļ%Bűi_r4oG7{6z}'3\$ׅ*8RP!ʿl|3C1CixPDt[nR/ZQq2ޤ~Y. xA8a> ?\9 iglt*ްwW|I"D:յǧDXɰϬs ) (CRaO3$SJW;fL2yʐ@jUOś< qFCB_Qf+"{[X/8<&JYϸH6a(eV&.F5/c\%x+Ιթf\ PGQ:{ v&3,^}`Q[xKVW_%T?{W۸ /A6KX/dr|@v7Jl+3jRjD9E멪)q<=؟9/f _$d#[ yvHnj;y8--}on QcIZvB˄ɼ<o&ڴ>o5]~jG7> MĩAn+ƱfI1h*>fpL4 x~ա1MRJ6]ZHKk B mDB) F}ZvM#~\5{MZP…+݊g뷈ZK LV-^ ,P-C 5Ja:{ eeOJa4*_hp^f;IR:JjYìLb4V81 Ys",MfΐzZA\cm%Fd+FڰL$1MR8lFt)sP,RfWKT'G5Xl(PVB"Ge$qgA cLF3I*2J'ڥEqemH3f/65O()a)-4TeTIF,EƊKBgZN)긫#CxY6Kv}ȳZܙ#_E{]u&?o,C^wWX R xoț׏  Ze򗼚Ʒ:y{5 +x5t6_([w|\ ?};Jq1d AEA]I3jkfIsO ,k뼊r?m:1$)੢O\Gn{Ƅp+?)"^*jwL_'"O w(c ߨ7GD$q;?Gّ'yّ' f/01L"1DRƂcBeT((is.ƙD8FA,ُ]'..7$/foWDu"@jQ\_6+5i<ƶvㇻ(EƝ PDžkĈ"qrpELLx` PT:umn#Щ fEFV*ː$;HR sHR`I%cAn@ k 8$Z8dP,U/0 #%3.bD22D$X-ABѡ gCv3:DA9QӠ O'|M]6%̥TXGN7 \BwpD:Goas%>Bֽ7~&h-~w+Eig{BTt۹ߑXH3-~Ya!M.ЧEMJ9OYϦ_F(Pw˹OOh;VW0bQA ~n;Xt_G >5H[,[m^g%sTo9x(. [W~zRcCh~BޱbA!.rB[S*hE\2}=tY Dws7-wLvXd^*ۻ~0z{a||fT}럒x0Cqn;.O3U FXz2\|L dr4ۇlTO#2XƈjOip1bN&3.}ZR3.TxA} } Ff&y$`Rb 1VO˵/*Ӎbľ^ړd-6늫&w@wF5\! B VDg;c(3(B&*I)J1J9<qTM3'Z10*s(-w wb6RgBhP‘rJ23s7k c[e(rr¥ WA}5yf6`d^!hL8*5|C*S ¦J[_"cĂ~qlaJ jXuE0~UVi'.ATQrzʉߞxK#!/}1[rIk n{{9a9R;0rok.a1K4cZ'H 5`%vr/r:h;$-L,M>F碝 ȎNLݗX G1D0VvRCB#U8i|:6yL'EUAB.W 9xN/ dbZ%6!D䗍dJlGB2;,Xūe[i  ')0e!Mqd@;.sNPu4Al'UdnS$e9-XV}W"hf\|ǼDHe?ՠ[*JdxryqA( ' 5\ڔ: ig Dq,lB4|o^QTGTsdd)8uYY*p0O|x0R Ef30,6ZA$&\&EBVRŢbI5 'X % WQ\lXu*T\|Za'$ko +^xpc[{lW]׸.g7tQ蔴'{]N  c]/{||C/]nV܅:Tw:)rI[C[=edJ+NR$vꢈO˺|(O}I!7#+{. 
ZPzCCvϩI{G]!:xէÖuz[?>uF#D\Ox'4(Wwחvq)k;fUCq+?x=8 MNɼ|„qPhݝ6:7?n9P4TK.It֋P7m4:OoHhMdK2Y͑M֗9lW]INjo1r:y¡%:v"a Xѭ $[ƭF4%n*<]Qa[ξCr歊bo:q)'p*\_<'8jݪHh#=#0Z.s _y?69h~O!WPLK[_#6uuD$>۠+_$HK5ܳa'uc D a5x 2BwN)JxӋ|n'S6Z/^𴙪 Y.]LbigQ6F ցh!(K`R4^ c2~ܫ&s~$f~6[&}m\| ?翨ƒҿN,*yw 0 !iN^ f79c rY)P%oܼeאUpgjsx9)s J`+{\`[cX*R%2_R3hZ>#4?n6w 8I ??jkkk騬7xL*82S&%J -)A[J p_'H8)!4Bg?w9om2Zl6zխ(_,6en}#elk7~J hZJc]Aejq$Aa$h(f200_EXBH#Fݺsev4ZǞL#!#y`KpM7*Fc(`41 Yq| e1+ó(e!eGMӭB*uh0ā!\m̬DX+\M5ҖzzJcܕ镡DRrٹH=|$k"d72^ F%z5Ӡ:v9x'DXΜ |K )p&Xܱ2$C!.׹>ToHv~C|"zi&bAjDKwrŢ,~Byab6zŶ`,8Ew3xuqa y0_-\0 .RQB  SC2f45!YkrQ%%U 4F4YaJX&y0IZنiEaL*p yN0Bql~0#ZJ+5ݳS0YZ$U2b$*NXJc*U"cZP9Hx*@tҽ|rTT9ӟQq ?Չ+#), Z+$8g b#LlL;\t Wl+0J%ҮB&0wRaE{ JF5% Ԉbq l ֜") 7#Ui`[L դ“0`繬j3!ZP@os)DHhV^8ED3J<@\''ڗ,#ds`%M&wru86q6or/rLj[f}2~@r`oFLYuOv9~|zs} ߀/ﭙ.]xbe4n5(F}^t]z <3ĵ`5ef?FoF6@nʿM3WVH)<KQˁܦUf@FY䡺M;uf'l& rJEi0(B@<4Qb"1C00EP@ۺe3*L04lxJOe] w@3\b-$2IdCR k]e4J5JDxwi/2#nPȣЖ*t4iS&9t,k|G D358\le49)2+݋;vnm'*vwaar*? Q!v1U91]ѕJ8Rh <3qH[>L%6XR\rٺUMM|]f G4}`-h8|Ԟj8/`x;FIb- I`DP B qdq%X'Hn@%`];k* A@-QD[_ kCVJ,5 TMD)3 체 B@җJ;U# " $W7x)wE[X+Jcq>,HCyp.\Z~̇Im!|f~uV4"yi?>(#➦1ۯ߂ϳٮFwKT~eO3>NqsoLxp/'؇1~xG!ϝaoFK+iFw_k5j*L>%P?>w>fò^yRJۜ+OKz ƭ~OӅgȹ̗v}ʹkœp5hbz|aS <ocޟ/-K 1WCy+1:7Mi:X9gq -TjgTd5l= n6vwkޑݓ KK4'tvOd|uǹVb"KD*+b[Fb1| E]OHHHز#>$J5zjrtRQR5RKBg]t׼N? Jp!vɻ =A~H5"jv W,Wx5@?_Z4{Yn›cfZ7V)C%K e%J;N_Cv,b1= ٱԹi(y0:P%;Kt!ʏ=nGS7W]R𹓯 %`0'0#Vz;s:8rQ 1ibu xv cSJjUb{HAXwgKu_DA^#2z"֩{r7} ʹ~thmڧ#%@ݤH6r"ʰd# 8r|HmOr۳ʎS$kN (#V5ċSd^mM[D2e 8mGիOB!7?[jIjE^ULU$3  hq4EZ*: ʤDu bކw`Eo#_9u'[6M/q*My Cގ7xuzuY~./'/ৗS^d3x0$ks:^|jH 5v7_Guv#x {Ζ ́GT|lEAFאp/-ڹ|HQy01, i7ōBdN*sp3^Ċ 2XqX?Ƌ@eɨ$xla,'4tc ?yE\t~z L9r{.磓>f+/E24H#%HI(PaHJ%K8:q 0rw#M$*^t?eؠGd6Y JW B|rck#("RR"đHPj&!01BNoA ]zQ]2zYؐQE~%l"@đH2ĉB< c& 4!FdžƒkcS(&@$_sWa}dm~:`"q˪(~|Y7|XI~]~kHvnӏ>M,tsEc:=? 3ؕbړ|7O;SX~qگϳw=0zۣ02P0%Wfݿ~Ǣ ϥtg>O(Fi~8޾]Sy)ʭp.'xK״k:+crN?ذ#xͤMYq*9ԣڞdyɺ! ݙEi@9 SʒPOHELthW- M HCMrHQtHR=`ѣ"ngOd%)5$"cJ 3XᜌUn%r\_56|Y{9++|Yq,.4J J$]UK9W^{FFUʡcvYIM.ԍm7.*.^ j}Ehw|OwQT|x"M\YEb}~a1QA.{WJw\w/j> ܙ կ߷ѤeGmKM-p{eޜ1C~^NUO+: k9lXstlݿGOkp]&,.2z2";AN̯N=ŧluL5 mWU*s\éz5M '\|pU @ r])W.r9tkTfߺW+KO\])\E<{K?/Y tu:k#A;Y^-ujX}XnҞ^BXcx_3wp1X/Mkvhhc@}/!'ɤ¿>Y3}:+^Is@?/<{'e\y+=2-

t_͸U`Jn~ll /̟na֊`SoF<&e2 `xR[ %b{~1#CC)5r*FinbGp˧h:{/|z`T̀_| 5'jUu=8y U) _+*`smF[O"""$gDI-%IDha-܀OT2 8F'Y Ŷ<+3~x>@=.kE3m,@BzH]`jv߻ffDDH*Ӕf$ ә C"6MA)3gBXJQ8re32*˧@x4I!uá([OFVϟuIA T?b^|;X~CH`Q #4/0fZ( L+B3IOGC S{c1T bo|(&XZlG#7n%+mGRqǏ,Jil;5$f>'s6R%+a*!po 9ߟh^wԼm-^r7?8ĚWvH=؇ux$/"J/tD㾪'e 1?@b^Sݠ@VaXː +ۮRSBY L?^|b+Xy綺@" , ++a^ ԧn_4mWVwT)6[{SeWM_Ve`6hx #.-PM{1Q '"VLf/yBx|l^z7CnC`RkNȦH90@ssc:Zoc&ToJ_'u?\:E{@|~qj _9r9r9r9s f6Ƅˌ)dI]}M1sn( ND*LiH\Huf"m,ܒa9b˙ۀ@(-o}4*?|K?HB!fw} Y!iɪ(n%#bW[|'8bi3d-Jl>PitPd0AL+L*b\wɨ;8@,ԉbTKe!H*m/խG2D9Tq<]W5(FJ1vކŬC r:=[ۊaq}[1hp:-%QwR:0QC_Ʋ+y̖=0i9@WPYחqXC4~to5oiz\9TCd|g _`L!?[`ن"D$͐qӘQ cI3#!zj'= L.1"n<NWY6u.]=y.g_~/Q#xSbիF>\~r9bOs9ɓd9txZ.n`rX<]_]a"k%dqLy6G+Gv/,niVx4ȔzWxaA`//R[ի;A%O53SFEHeܓ4N01g$ [gs{'d Hb-A9YDQ?^6L9" S)K4)δX%ID`gb2,[T@jb|i))3]iӔ^nP0R+=X+ Tz@sOb8β C0~12-_QXWQ0_k&oi5q|\qG-q1~ tki2aR7VJ糅3q<VG%$DvB"$7la-3B)fS#(prcE_xj.F ѷdd*ly="'S? $-1=ɮXءcǂX[, nWaq9~wH\wƈj1U`dEc=wJv|q̼@Ug^bk9:hF&ELA3U _xsuT? K|VȞ@dE e+3ԳNI!cB> \r`sϪUPO|{/A-ރ@?⽏k*zƶh䑞Jd\ӻ]/iZxKnhΘNAxF!WY/<2\75hSV %=\44EΙ,o8'zt \ۍ_|,4 /+BQ{̓NOhw=a6wD5&=۽^F.w1)K)2W*JAQHtMjZ!5 ƴ~''n68%8wzֿ-НtNtPٽ !Cў=<=?EJEKm+'*nY.+`j\ow\KUP8})jtROF-JISEIg[9(CZ6ũnC7 P}h]nf8)qJBqO!lꂤC8$nIix;Jix6!:l~aƴZ=;xmZDm7Q2wC}$YCENe@}8˨Ќ6]^£BCT.wMyKJAp\ޛUC<JR}Z*cpX&LdCD,(A2V'DgZByB'd;_RBq<_=ح.+%$Z`J B(3猐#U0P jB I$yzEko1WSHwyNk1&K d9SS!yӤ ~̕JxӤ>E?Ds7^frK(z ϽOej 8U8PtQ;&Yi8 ab{A9VGLCunh7Ar93 Sm7s.f_53- pAsk{Xkq8FN]|Rzx:9`N1OB^\9 OjKkj%eʉj:*'%tXGo*K;;U R 8i5\HgMi%[ty @-Kh >^dJ`|.<%y>{y_bkIjF);\A%jKyք֍%q^} -lr_I!+܈_G' F"U91"[e h,UR ,}ppv3uݢcw%cW*d}Gh1K\*v Q\uP¸?}x~͊ IR/ߒϯ#l/p̀ "bcD_~y7S1/24x+B3I&&=5L0#+XD!Ӛ"5XJ|ǽ*/Au5o}C)+ Z*t9dQjxiH`Va23H<1e2-ix7o-,E!ƜmP8|pQ6J%^ces͞^tO͞q~4"oJEzE|4 #fIVȏ0})10[<*mh;v0Odg/cPwvzģokMte?} V 1 5KY2zvV@z=كeyqt!> ecťQc67QأҒJ (ad]9Cjvʛܬsj*_|s{Ba2?V^?8Zt4K֯^*Q~&T]`omEO+?KysM8.J&N2ic!}QHD#TfX3.̀3R ecND z.KOKܸ(m[Nz% tEIPl8 J SX}6[U Րkg؞Or].,(46뫴2]~3+{mfAWFx*R QEN?aPDܡϔ*/1Wr:3\{N3jKR8^ ˭NŝL$&52_f!+-AyHYR#FܰBL)->ylG4GG0T'LJP~gM:?}ZOOǕ!%=W!߾{UMwݷO_XTC|3G?O>=Ɠiό\t5~ }ʓ ceRl ӏ|ྐྵ )|?Xpj9[}qߠR|L 󏐛\ʅZ1v_Ne2/ٻq9KbNV*ٶymkeiQ|M l۶6ذOT  U! YtliJ݈첬>Z*gMH\Ii+1_~ t@t9ᥲLgEBR2RxȽ.k.\tiڃto9.#:fʞ=#o vNIO#댝:Ҙlh Rl{ɏ_=8St,?hw|D͍Jʍ +64LUm⮊ 틮''H&D|!_DOꝓ孯ty;_~+ jb%VG_= o@*XӫE~z+$/ǻ巕/qaѳ&ɻo7ctL-fȜ9r!+ "gBgRY 8g<.dfJoŜ5cvm6)Nn՜i\{ 6TYń*(x&c@XIƂ([ ByqߠKg)Z[MyZ_9n \To 2Xj XOXoC;le|7|O;Kp'!B;|jM믣>Mע蟩+5{њ 6mm'}:~w,ǫ\Ӄq4:eiQ˥2u.p&+MFbsP`XyӠ'HpTJBZ;@˰p97-6׍ZCQǧl/2}q0lh޾zk4Sgw4tܭ #ֲeio?3?z!'%*D*b|f&@2ÂHG*•>r޹"O˳ "ј=kǽ~Ǔ[f?5GLKF)U 9pfR[UEhgg9A)Nӷ[Y5dKmfHpjQ5-bTX8!vz}˺ <C`]G u)8+V .v%!I[4m:huژPA,Q kY.5G=9d,{%ح"-k( @TnG2E6Ӯ,2WZLai4%U(lBKm:(P~CS2Vt΅s  !U*pRfB[*YhTPyRCdw0ԑ-Z e ?˚@9KeːФ!g+X֗) gӤyQ* 9A(|pm4}>28QBi`(gnF r005F!9Fħ`޴ZӮiIv4!1IJUW6tF3NAPԍEПjBP=W04nE#wL4U e[VXX/,դ@hA\_T•}xhDCzѼx볻 :}l#J|"RZ@wl۞22F~AThV 4w2wK ?E4O~-oƱ}F:!PRo'ڮ'ҦRNv2+'m [rݶ}=a5?4>@7Rd$YQ_`[Q4a"b#/JuA4QYDś[я/oW,6E@>F  {1ُM HRzYRR's_ㄷ2i.cܵSEJ"sЀ PAX=XID#f1t^(4Amg Ǫ/OsvuxÇr_^Z3o7 f@ZWx]`ࢼ?hQ@;stMs[:[ .^g Ddk{ ~`M~jnF8` ZzyG0&jŗͺuT4ug-M&+J让 VVF̷l7ʅ/Fv4 ϥQ7ډcRdU"Z% trц:X:)''1 sol-n@uHLNjEdQ7D+6tZ B5#92r9`j֣U߭F.M|em6%0o/ 6j]X>$7*W|6[v6u~|-ed(%eNgKaɁw<].k<1ۛݷP ݌<:GH5%n mƬ0+鹅GS _ @:t%GNxӝ#1i /KM&rEo1Lp#NPyʂ)|ΊB)QzXW~%8f 4 ҁګ1@4ru -FTyQ>cݗS<ߟ"9/9ӆm;k3\*EZAn4]Ԃ r׬$4fZ9QwP oo)~e; Z݁*M! 
.Sm^0vt$:j]h>HӀv/ @=9Hrcja btu Nj\D=fF'af8Zd XD4Z ;NH#Um飷HkW!9DbNKޘ/KK/v<M&lAKz֖Vai`B)G'r +J!MYHmʵ z|9-IBN5a&"vڠo~ֆx똳I8l[47ZWZ&h8g UՃ?2o iHG2I^ qÃCQ$d'DfƼͪ=˨Ţ=): E1`%8[?2?,0deF:<-xϤZOCAN֎D9kD5O4Pst4X6J<%Jǒ)| # dڃtZxQܕ p1Bqg%$yjfRoQhl^0]mo9+F p{NV|f,vvn['U%gY?%ۭdD3fbɧ*#1LxŌR!h>XyaVj*$B\kp<+$,5i5u5yPHXjdrx˩:08a3Qnƶ@[L j"߀FF)̓1#}ڨ;қZXBr,8c oP &4H/ C^Pυf^P)X'H)/πh WJ2)C-$540`$bQRMKIRhP>1$HsAMQ{IF{YPlH,_ \Wt[B&-Xa11E+<`y0ל,0ʅPFIIOBkݥ@9gZ{dxt3K.5횿KrX۝Mfr2jX>. vސ yCy*"D} YxYtέy1&;;hw6 h RcԺɏD&fx^{c5 (CKozӼ2($n׊EUGunC?}~mAlhF ޯ+u!߹)x+uH6i6dJ`֭}fuBCsmSOܷn/?ukAFu3`֭MDօ|*XdhdUi5/UYKee~.U2T(F.I]'AgB5J]>I#%8_@-Ā˃1a?sAQM4ӵ4]I\k̕ 0uŴ8KOT9 >.!#م?mByLm-H5ZMu qdټ3|[+N@ ( #{x w7kIo^&»ЇW-Rָ~ޔք4n$v`!@y2O n(T+Sl(!{ta!#xR8B)Z KJ$)rnF[ߙ 28Z)@* >}T!G,DRf Jנ@PKTDĂ $l*!v^&^\` ǃ"STCfnj"TBB2[MLupFAf0}pX  {,YD%s&F}^;CFOM@/ռ]z@u: f"d9 fA(M <,UC2r]y!^:=1r.% #ƣ͖ 3V873HAA1)LEs1;'Һc0V.|4i]ą:~$All;1!ٷ0V#!s6f#p얔3c NBVR%=&c؂N FP.Aǁ=>9C1+| FSsL*E)^#fB } 4r9Ar '\aSǷDd>:'JX۞g?G  C  b`2İ"o@dݏsdBƜ:D xs<3ό4*j6Yk~'I(   hp+zq.1)Ay Gm_As5q{o׫tgdfKW[:IƯDocK%W4q亏š?ܦ`h7RQ>Kk^cͅ( R!.:]x?c>>_+o: 3eib/ϬܺMG)Oh,wwMK).x 1'j=W|u'Jk%N[I %V'N gP ^͝[.':S-k8z`V(G=gfyg0߱.`W'K$}\Շa)_ {0}XWO4mlk0__dʠK{}uwI.z6`G2ڻ' 1s9>A6v48GoB'1^"ս~!ڗؠG E7#5 eo5vx#MZ/ƘRT$ήbezqd{7/xS)Õmݕ\tzpF'%@3%ʑaOucYhYOt(ai6<4!KzӉms0q̂*~;xF֑ +#0&;+Nw^>u^QG%W*ޭpİ'(S7@{fY^vP ƼeJ icY`l ,^oƀ AuE$BMJh|vB QEgV_Vu궣k<8IF{t#UzUĘHR.g@D"z|*PN1\]' Yi͸SwPOdx|.D]q_0OևTןyz#Mx8Zk#&/D6/kD!Qٔ/3QzqUk 'n:(|ˆF4w  udJ[`'Xz X%_|zlIY5K3-Q~dpLD__]Vzoy6Z߇X/ێ#,T19-C Fj{?DH.H)u)rFg‚1@zw3"h!<{axTS^]!m`XX].~TNhNu=WץOGJXJهٻ^xN r`A狤Kb9bȒ`d\Us+=n~[zL|E|h7iZ`|v L\0Y3Ds9  ._LKq3&9 Q7Ӟj]LI^Vsz%E)꫒e=ȓw\F^B_ 2<Ɉ,yDէH.QO/p— u`yы"V-V$á,9Hk\p9 JHpٝ8:F97D9½ U6 }){Ar* \6yMbe(sz2|Q~&e'WL$O)fTMkzH=OR[vs:2~Odjv7t x;#8oPxmm(Mw/)~h+_v1;G ˍ?~"w?qpUiUG-jt3CT3  76DpmM6l`w3ppL l{!Gi'Tyo  VV" ^_x~*/ٿFz}$n0^_T]!X^w˯ú,\7cx >JR>% UH e+}kmzLbf{otRxi+qU.֋!`&NZrz:7XAcoL4&t(s-deLj|U]IHKqEu\klZ.MDtj'z\t5#e!:l1sIH qz/]18!&qDo`a4҂Ql g m!VZb%Iap 3Jno)C\_0|be|LFb^STI-k]7k`JZPZ[0PPO|^ AuvP߂"TqayD i*(>xqM'T,7k^xtӇl^5*W|7?\o?}eRvU^c9X(ժijQ_Z?fvuv~*f\}iתg6 oToejtdLT@ϛcATˉMCH=F7$cF zɇ^z#~lc#=keȥ#G ʊ;cXJ&r g$T?Xxifޯ[v05`t1$>gL?EC1WfR\!5r)&f%#sL@/2j&#sl: u~KP7~~Dy󦪟q#̼BH iypQ;c \cA>C~8Qqtwл)6+e-#vsX 5 ZTR@*v0b!Gؘ }a0H ^x:UI;+U=n g]h"tdĝfk%K #Ī44-K]8.w9ɹ˛۬4ӸzMjF'Yňrtd9f#ê7a,PB-,;aY U&ݸr[ojqa]jVrh)N'4̖V텞9rzbg_#)Ma_|Ϫ^xuעLH*|fxP{@ u댧R 3"g1gٻq,WT5;#Ga+{S\|e+ɝN_PeJDP f;cC9Łlt~[ s1Z,st/[To“\op1 %\=[M5Yΰ! 
h&\t3ЃP tNA-D1FnF-D v@@SȚ C ABL[i `I<>)D߽"w4p7K!lrF,8`oD!qŒ WJx쬣G'ngL`Ioq:ztڑwj%)vM68帶Rޯi*!魨5PʊN߿VR4$SxC!$lK"pI@a` We }\udVcM8IL̽ƞ&^&(8 :мyS4Q'Ǭ9qSKFA "X[9X2)ōOY;gڌNY3CB)L⥒/`t]CKJAg9q -/{NNMeNmNko]BZ8JS m4geNX2DT93ۺ:g8q `Lk;'uYۃjzQOhW})nq#g^`MV3"=wƼyE8V*U;ZH b!H@2&0bcAx:Lei:o Ǵt^cl+AOV&¨F`rE/rqcRF=_s-Ǒe2JG|Ż}qEnj>E_]\OG::Zq9kr!+wۧ{^.z\E1z~Z\ïƇ͘tGwwd>w"љwл1BSpL EaNHp@J%~1M$ZSPC<^- a5,RXI36cJӤNȒ4MF$.{F\4abL7*[T5%:q;tD}`뫺GO@hğ; ˶Mr GјA2r L/&,Jun|,K7Q0" z̔z]thcf5zD=e6ף3rTsO0Ƈ5{&yFnfg;z!vhgI;*R1iۅde)>T{mWAsY#"uUI3;'BȜ ҥ۲PH2R_B!!b:|1Qo76C1!nsJ0.)y;l31u\ }.#FL11ҷ?یڭâۈ'f3gov G,=&<&kd̥3JOov9e0uh!+ڌʀ ?3$*؉W 愁!4A_p醣K,!|uNPvOy9; Ψ'Hi!c?"`% TD*BK"}*E8J09HJWA%iJ9-*'84@t)[4;~JxH4켜p*$7j8Lq@>t$0Ҽ$ñR*8aI@?n 0 g TQ<6ҟѳ}Ё \BkHt&sf\H#'YɄ2 J@̴91tA qO5XA\F=uyS%a^{]!Tq&H0LaŒhg#<3BG' Y˳hV㇧iySg|Jpic%$S|4Ig6UR =ݯy,xtscm@,@(B *F)BI*_.|S*nAg:QQ$D  } MYXrTI2"${D#Ji:<.4ag+P˵2G/ɝ6|3N#ݵ;VRn5E1JTy,9.ڥrg#?z>-hz|Z]k5vqZ=O|'w*r>Vh: d˽reSy6+eYv U= u f5%%`6HxS)ѰTBY7ڃ.!3.SW_nO ؛BK;9 ۍ47Y0uG!C5;rT,<}n0B->u<} xP"S1/H.u}b>E]z]s4oLSQ rY!B^{zl{(hym._\'_UU5VjQ<ˣJ v׀ †_>fdyt{?+~wߣni)݂sQv߁>(DZ.̧_)X[ڦJ~&ȃ< :w{|n租mi#h寺uDVQ)5u'ϺgJcu>A؇p{̚S8M0Og?ښ f:JnuY6[TZec{Q#(&کMB-UDŽ|^n߬BR?ZE}^ nSN!aInn*̜B!Z"Z08ZGSj)4Unii~r O5KNq"8y(tq E-SRɂb䥝s,&  ӣ0*4Md5 7&1,Ϸm$0Y[ZE( "s>c$&KBj;=1 䃟V;!u%G'LYO/Ĭp î 1ԾLdŴE/xYaJ6+Y},aO(J'|)6r_>vu1N|<:DuPpnޯ3_5q:nKt4կі=ң&n%$dtɎ,{`cƮ}v<DŽs 8:zݔ1;OdA:|18$ň+,=+TҮ:,YUj,^גuVHpVJH%IяuV s8'C^5$w9o$nt#ڥkضc.'ՐgHѐ=# q:k+,a.!> :憌tAuHo10t sDv sFg`IH24fᶙ/;Fs iN֐i̸pr.#' v@f=vi";l4Iڎن*vPk<4DN0d2#Y!” ,"P(XDH%"%)G vpIrAI xGp@<=:jAOݓOq^MEfz>qϭ/"k@C|/J xP_}Pڶǫ K6p\;F}}]bpYs۫ [8^3grύK2|ۯOnXa[ fiV˧GWT zi=Z%wtc՗|sK8 C3ziP1WFZiGV{՝~뱸b_\ljS4]Z6v er΂f :EtX^Y~=HK+՝oV#{qpUи]xC[MmGWŒz40$%.nb#8՚zAN?A&tB5aG]aw &]:&7"tplLx@6ܠFw @63gɼo4Q+ saA#bN=#R+y:˩g7qzF´/t drֹe/hHuQvnYwψøbK fY}#Kިt;LJ{F>5i˨Mh;WE]OQ-cXM-,<澺3Z\^; Fl4ݘ/].D?ΪLzXV@czVvE DTv&G6ԭ22,=?9a+O2lЅQK K0'$i[Nt ,#9l&=3$ N!k9Z3@6Q%7C9(7if $NR_URwYMFg=}2<}:/I WKiqߊStn!i+йBzvx!@s&8嶕 ?ܶm3) lQLeuLVc kT=S ̝'ݏRxаs ,W%)ec͛Ys;؇c֜m x:=`tO5m$Ҏ[ݺsV8EVXACꕶjRCOnɂU)$h&坪B5}lǝ!D?ZUu`.AA;'[:vI]#aKèL``{0<: t[_' [r8VB;I75Nb҄R\tWh` J|z_| E(10ID~OsCDqЈ3PQ91Y9@2 H~kYM{,!3#DRI#84qLE(aR(q4Ij`Vzպ^wm_aeunn`ey @Tq@Q)$CX&Q2(,(Q*4z"?ZВF죸ߝȝ\$5e޽kԓy:` TbxL^JiȄʈ@jHۡBV< 䔽jX AR1 / N F+,H*LM"A$eqtB}G*.G#}=eό|2>3Yx -hQ|tk9 H31RLV\Q IFy= %Aҭ83scIgw6@<|׭̷ #4T4]>sr HW3}> iY<)9睨_?w<6lxow3ۺńD +%YSmSn|:82; !B2U] քR03AMGOYʝݞa2]@EM y._7g/IdlX `L|Ol{s8UFǿTIFVᐺ%àU`;`u?0ndwŋxz~\U:%uq. 
L TĆqı \9tJEΩgۜ P+Fa1rWl.†3zaB9w 'JYvݩ0N>:'S$%G9-kqB4fQFxq}GzQfc0 ITl`mJ4!V:\ FcMpL  A<yIc{Ϭ4 TuYz=J TJ)sNh !@jq-/~1IrSB)^0s^}o dTqS00M+Ԩ8O^L8j&f#BB)m7ױCl{ -?ÖdO8>&$'+ө˵f%st Ds5O")\KPj-yRja"RkLcdp2BBY|SP .+a8ljUMCK9 Q$)$d"*Q<$S%c#QC5#j /m~j.\m5Fr]aAW-}Z=q4TSy_짥Νk -%OK3vB_5j)e~ZJK%/=E.X5 Z ԫg6n@Ᏻp͜u ,%M̩ uY[!sz_GI6<s95 B_wKU-]aD~+Ljv'>ݕD^n/pU@ܟjNToPUHAKz^п"؅pa|MM|I/-T -]Akj] P׸ 6wW<}#ۉ^2?\s[A~ME^5E^[PqZ$SޤLsv\rkEN!E' q3w1Bg`KPЂ)$G!8{b:^X:m/ڱW%(l9E.]K+36x\yd5˫%`nmi/H8NUy?}-OMX9HdMr A5Cc8\Ȇ­HEērwah+TF?wanކscE,$X8e)PeW2 n3Yorz/_+u?ّr,0uyMB3 "S7M:[9aK8>xd6Lۼz6#?/Օk'q0)h-Xid$,N;w>w:&mlF0@z 45*oLO`]\٦7 &# q[ *DIK-ԭM>Bx M>5G=^@L6 `%WzЩKϩc%]0iqfah0Yܽ=5/Ct۬PZX*q ׄu)jwhsxKu]ڮ 9EٻQsAj)Y|>RZP:{#R6ԋżP亵P- 5g|}pc.ӻnԀQ^w*ESY'Z퀪pR׵:PD׉uܣUŎ:VU.y-CԁNp_2}[ lk^nؒA^c%) xخ QZ@Mf FCo3Œ_Ĺﳲ/g|u}#Kc[,HȀSf>>кB?Ƣcl*şS@A|eM O`vpr 0۝7li}U*L #ePytM#cz o?~Ńz$Pl3Ɨ*)ƒh؉rnM泗ة &(R&>+l)8KC_ #\DT{m\ӓRK!ӚIDM42q@F Eci#x9J}.c>s5칛Z^1 # \vEH!2zaD^ : \w~+%:"JlK") cjtRVNqc!HsEUeHb]=SH&x4Qg3G˜z?)Kf0a]×~V7o) |ڌ bǾ O?&CZu `=T3-%x{Wg&W$63e $f8dTFBy!pXRp]U;nNP|^zjV= %b&x{sT4g-08!QB eT'ICJ#&* N1_+v )G2@<,GN]9iW#J RQV>!Dd(1cjio2>e?n5]+E Ûի= }ջw d87 bt2n03*V1Ni$?**b(u(b'ʀ 3Y*}- G&[t6.6G3c|:UFe^,Yx,̧_>[<`)V9 >~,ɾ}l)L{N >~1p6_  8 LpTiLknDzE*?8~t%yʖ I۶喔N(Y$JԬJp..s;12khx2CBlǯ]ְQ PN10/4y7_HB$BCg"8_t6hGUo6hݭkb,QV]Mjdvy=޼ubzcwn@ Y 5 fLL'2OQT%9ʃUA{6\޳܆J2'bQ l,D陛 ){dG)i hҬ`h{l~qܝS 4/uPL]cwN|]6ݓH. u"5wy5b<_VEQM/R#ϗL:%ϩ\6~T67qNסsbѧy(E Ζ,veknROaE`Im8p g !ÆHa֊,FU ̰p%hV<",b@4 0Naڻ.y7t܂JwpH$0bkώ ܐ"l 'be~eqb!ďMB֨mRG8ް!/,U驐qL!>ݻwo;%fvRRU~\`#iGZp089ImKz?74^zN5:5ъuQU֤p.:Y8$ 46fj-e~Ԍ}7)S*1#B)< |y*!3D~3" U$Ӝ Jr3$JqPxR)Rhi۹ao &ξWm;O0$b!__wSԤB*w_jVM43l :W]=*:=ZUvՌ Ue S̑Q6Q)ulu4)C7<1 Ř A2 rq"B ,ȼb^BquN>26va7灧MoCS%,DK7Rob~֝lJrϵuqW{E*jy6R RIObokA݅[lY#R0C@& OR' i 8Cx> /o%$")R$dJ$+1"3\~uȰdL"a|CʠD%A`B^ Wi 5K4|ǵpR}J]8j[vqʵ̺b&]hK8VSl<5- @PhRK{(\?4Kۿ cIAxJWI68^d^$}u7?ݸhwK% too_/0 h4}.>,t١EGw2&{ 9w==ݽ_Ouί.:k (ĥ]Rm㨡F:P#Ќ@+LG9 q׊}] гiwsj}.f[TkfIS axCҪ3  ߎc~LA,;!u{q_CӊN(:{i|}^m&p^_'٩vN;v<*wXtdV7A"Q3(Kv4(K-Bp\P7j1W8&Cp/O xq'LhGaηVOju3kĨ?P͖M"%xB%ێP-r$M0e=Hh 'n,BS[VS!b@ճ騐0BgTx1ŏ tn|LHv {ˏ*ɾ] OQ\_N6s[|ocBN+h/̞fgvZLg " K$IM#))iN%Nsioe'( Z~CEآ@ ٭y柣ů//\W /Rep2~2` boގ)h#>Xu`{AD{%q]ltoA2EZ%̮ufh̕(T q 9 )!fAI] )KadOk|h>rSHL HNHJDMdh`S-ktҭ2/=^yG.@Z@<w닇] EzD'\ q ) |hZ%GY1ܪylÙƇ] aN&<=m-;cfb{q,tCoHo y!#t9`1㞼`'_@( ~8L1$P4lfc05(+hƦl8 @"p' %}pHD}nF2|S]ktM=1J F' K:c$H$0%Y֐3Q4MJ&zO=;ɧxRMpJlM: Q1ƶ,6ߝ""9QK%s#@n 'I(Re>&g)hd 85xX@=-nwX)3G)qvѠ2zQ2p#p=]ͯoٗ3u/{U%U4y"%W|dA9Me ̓$ THP>RAި 鈥J@AIy|GP#Ei;+Ý,B` (E.Ab[<ZīPߋݿ>Έۻ=#6ui \>>.0xz 9qxo6D<]f%б}ɴC9(rAz/AhX{O9^຤G[q# %D(6C҃QXj1 70/2`[CQӊn<6>qy< ?@{Z y{^ :Pԗ5t^;: ˢ(L]/`SP7~{)ўVyTr[']0 gt6[6>Nm; }ḫ=n\xSh)A `Ӛ v%8yArnfsJPF!IINt=]ڔ6_.?} F81L)ݤĄF~irK oûwaCkhIJz6c1ْ!qteVvI|ouU7#oA^W-Xq}S}ʞ,+,KSffDV2ʁi ~VdZ>3; v(,)z!m1gY,!zA n|aQ,No 'Mē#ɗ}k=yYB&uԽawD=NK+| b|kAVY 9e;^m4II[}-K42iLTJԄ' γLKO|Fg~рSϓ'I<娡}K'qRM\qqM\6w֚8OD&:=?JTN S;}sHg{^ΒBQ3_Y<٬- lW8O6%Խ6ʫ:տ6(_u$Rll> n>?0'cAZnP'xO =[5c6l,FeJX\Ќp%5 N°$IL#6O~(iQ±TY};yU3)Gx"0cG c +P@ag(8St( S~+\Əކ(ۛ' y?7WdIPa?5;!s%e(2VP&ga7<8OD(/D?Q> 0mJ?Z!iGyέg\ Y}5! 
C 3Œ+IUȉdOO:)aYt8 X&S5O@JUnl49RrUB3* D‰1r~[Agl,FZ@Ԋ8m(ay3Kkp:BR9m :b t8[N9qcPLbU.IJ5DBuSE\:Z5(G$9ҝAP8I@QVlQx W{bm9ڍKb\$k?8'*E3r)V8W1 5~K&r hnzW{NӍn-گ/V;Yy+KJ8F ZYr(Zl>]^_yMf߿G߽~\z`Qf >\@l~W،bfgB5L|3O3j|l}SK}8V@F ໃ;&>Qns0:<5QxIs\h丠r % սWM,bՕ UfwmYnO!-.нC#跛tKeũ[EN*ϻwwC`]JJFh[m'xr92WN rL;XTvKC173{Kfߞkby_kmXEK6r/!kN082 YJ\bVLݛ&AfFjUUWWWWU/,'9j{ϥc8:nv<A_~3<*2NoPcBn}qTǐQ5q(hBŠA#0%+Z+1 иrBCp}3tƍ-AAE+pjŻhT0>j緷aHTc oƁMF JB{h!88J*u#wvvߏڱ9q.pY#(c.aKGvX Ƭ߽b:4kg-Tcwmi~'&a,]!\G*DTI!^ţHMZWhQxI) kJ ^EB\WPYei%h,V GԴ"u߫;ϯF DqlIW94MR"]H%ǰ-҇``bVYunD\FRcI$eQBCmRJ *)4I`,&6`DjjL jQ<"{alxͿ[;5|{Vi\8[? `0I1bl ijqC=reVV<l7w"\#$a8bTTq0p 8T")F:}jg|t᥯RŤRNϝN$Vy LD+,Z8S/<0 Bqĩ5upSVWTRYhc9#fx_'*K_ +Hxo nV! d ob0~@{A<|{{<ގANaxeLEtc ^s˱dF?-L~ab:/pdwF1RE1ۦ֞ UGJZM/w ,†,@[l·Gl06ΧَOXZŎB *潖d"s"SZ15>_՘SuCN*M=a*,"{9Yv7{O~^WO8"H,qE &ݱjs<܏ڿiV :M] eh8Zz'n›kghx@I6OG[2'[ѵY||}[^P3djIc~aS+Z7s_j"THgc5ét&|j41ȩgVC؇%zQҾaif@Iߴ!c\P^MO$ ֟rֈ@byN#,:Rm o|N %RaƺH9FhY-@'*Ƕ\Ҳp6P_\g7z:y祍.I1 1KSp,0#H!#ۢq(L9#6eDׂ'#RBIRV!PTJ FsHSpŌ6"XsLRMh]B."`ev}l_V]`x:ߙ\Y:il4˭KmxH9 ~ZY rŽ O8jKLT!A SLS.)F(U) cEXFr۞;&0*%bPʒR\b9ye Rglnh—Ͼ3zq鍋϶};q䝌\[d8D#EB8TDLpX Q^TI;NԩH%ZuJϜާoYB `}z#+0g8/G|?T <#U";€yɾc|o?"#%C+b·AwLya)ߟSo ܢ _#E|^M"{_K9|\b9IF'.pMos7-|Ĵ.kQ2zJ%iHca"k/E1%8Jyϊ]w%X` kl[rA&MGp+%_o$J5cQ XƁE ;S xYV/? V9E>7{>(i!EH+5'*F=?YhwGo{,^ҢpMDupowMK)˴w=D&ո8z-}ZE (E4{{,'Qj4buH -ozWJr0; ٙ qzFח?ޯ"rN7V,EAc-+ . \"WMvRܟ@]\E$nRtQzkm[:(h?AZ~o'cۙNJ_v0[1-|$ݔ>4R { g_]pA f9^[ RΎ1\n ~N7IAhat;_A_~3E5KvVM*- +Խ,hM8_6G, l m@HhKn5%8hTEwa\tQs\N,mx2y 4@$88J*5#]λ^G5r w"ճl')DCqM$x4(aFʉ5Y[K*&Eh{u'ބ5JtPr$SR@A]mB:BB7oNF|JjH(ӤuWXe *ɓ^]6~]*ȕd0J7v+sjFZgU<<,ʤU]ȫt4oL1MV4^-p:Fڤ±hHM6Z*FLUXv֖q*ϗmSX+^A:>%)̧f٘(2g(oa /~ ߁Ve◉,}0Y:ގA > wP~7Y%UŸ>LT'1W 2@8 2"%"bjQ(1ryx|(}unLd {ȪG9vLgjl4 꺮#a.8@zEn2RIɣ(f$֊'iLXBĩ&< UQqD #Nqu w!2&¾u}ss&2!h(c:$+Fzl^ HB7ooH0k)*ǠJ;uI)0L9 QPDĹ")TH:JaO8 E" !c;eqѠq$HlCbhldO(jx;Hb"kI $0wIY`-ߦ*gVQG!ґ$S1-tgEipcͤem--ZaEss5{>zfN]s2jf/VK,ei(6H+!:l,KQ /;aX,>R*sjiQC.yUmk\}򸵵(AJZ`H)D܊F4?VNl<=fw߭eYdk1sH ,Iy2$FET玣 t]p|z)Gmr7Z" {d~0K|w&,WM(ɀƅkjJ0` $dG,p .ssؼ!Pc#PRIS}I5\$o8pt\-~eXJ}252∅ KN@!iA5oäFE$)sA5nTR%9w[TuX)![Oj/,E( @uҐ`R8Þcb$L4|ϖbj$R T;CԖP@5C\eS0#5FgMeW. d5rqk6gT#aTL5fP1|„(;&%Hg*Ď">ssb}2Zʩ`~3TiԐ櫹k.Ίѽo MqL!t 2 D"`=#'+Jpnjmʎ 6J8J5 ؛@0% PVt& 9jes6k=kbYCq'} vi{VyG6.wyӌlw5BqCV٢kү-a\lH~mB]kܯPqjAY'ND3j.k9e;ѣyׯyDNW$ywZCDn[-2o9Wh \N4+E:D-aMGk#mo{}Iؖcᲀd&2|qOpRf">knằhg $:`, u8IIuJRB[JmoEJH6_ w dt,fdo"{`qg /͒Iohcup;q0.KfcS 8$")#F0*S M.Y^rq\=7dsyIV2c~z4I΍'ejAnJ+G)㜅̪a*@o.Q}rL]~U˦~#VMcmiX@eH6P4$u q°;k*$kFYZ&nɺEq]ߵ(LQd! e*NL2K85<+Pd[\X%:!Cr)*.aĠG dMNq)HX$Hy_dU(bhtQM^MksGiV+"P"C "Tgm~o Zd٬30 !匿k&r=dM!ĸJCqVraAQ`Ub#h5 V@qy1Ɠz#XRK5ԦWFf#W Qt> vԇ "67-T c\o"ӊoDm'GD6!Ya5~bB=l! v!Vɫ_J%Dž-RQ+zC}6rU>\ң]/huD  Gr◝UG50F/FީV"s͜}V7eE55j_ӟikz5awM] $( \KSZ>O+w2‡b:[lR`G9r12 &8~|,gİ" `EXE;Qw`Rؽ`X>9T!Q^G h<tDVr*Hy*B4Zh9Z()J\V`M]4eO8G,+b_%o}rWu<ɓA:qyԅ)ieGÌ3PRpyXӎ^-xP\L$KM⁸ׅyE}R(ϻW83-jQE4:Gf1a4:-y?*TzZ^GgER"L}yA׫,3ZnXYІ+ Dnmu=Y}e,9H$"t9u>{27}}hofmJw!Oo)5@84x$v?R.*SǓ?d٧ˀ%pjee7AO|;R"AhAu/YTmc~%)EZT9&2 θ`x.*Ƅ˂YasI3("ą>V0xWSRi™ w'qC v5Ws5$fƼa70=U FxVQmq(QjX\ a})S +Yڷ'?5qŐY#XQgr!Z<5{M  [(;`&Fnc,#AfNS%Ÿƛ;(a)9)B KL<8P/^@ԙQlF&c.yc B+GIAD*";@1TTS Jm  @uA*A)-g;eCi* -{w f̀lCDƊ%i۴{3]$˅'t Pszq9[L~0 %E(9I K"bjt ". R9O] FTxV9GDt6 א,(0p "FX^'-If=סRGU/ZACPaH'|/ou8|OR҄BaO&vPN`Mv:QE"Vc/{r~!]BF0db}XO|RANMRNei0,ͤg6|.0}ά;eOrC;T~mhzz\__ |{tTNi4Hʣ>Y]+*4xQ }O8hof^iEO-HoҥM2d\wE|' *p9ӄ"oD<̙6C1taU3ĦQPХ[6FdSɋ3V}>ѻݛQfm:Nݪx\hE!Ԇp'Oľ%EΏQ>-G;;N=r~ZS3m"g\cY&t3" ^3:Υ(OR;-w mx0f{lg&XFx8 M/8ԅ}4w 4!GߜƫLM l=PzAh 7[0Qz{%U)aR ¹WBJSk{JȰQFI%ITiEa 3{jQ&|@n,k ??׫~T-cT$NXv VBAh Em5Tm:?7jT$Dn.Z ?xvuH2;fgv8np}"(5D׹?__`QjkBY'\41Y?FhP%O=[%-u%; Dwtb0c-' l† +ά;&DIE# ! 
mS+,3$zgj}KGj|[uiYrv)8|͇2W{ɘo֦qDyH=MY%f!ڀC<|7OP~㖻WeR=no{s[cd#U1(y[W4$4˲,?{Fd/ ܖ~b3 {aI-k$9-$ꇤĎZlVUEN:#9řvj8FK%d)Mk T3&EVP BE!C5df >"XfN!eRa$)ɔ09)n>5 ړR@a:?K;o8=LƷ-$07\2˹!]ze4Rg9r%B1-\,޽i$uz023"KS Q ԆS˰l0DL@icm$ìu# 0PDǕnOǁK"4sq :e4l5,2?{0v/X_ xZ\%̅`ƑW@0 Fwo?|3N7{f}|/"فJG@I[l7)cuˀup>dwYRJyL#0Gse+J#IQM ogF'{t3- '+1|6ͩA؅ jJT;N)89E8;Bկ F8z)C!x7[x)2)HɄ6"Ռd [&ؗ_e(uuur-!LyRέY2œ(UI2l:%`HuJ9P>ׁfd_&&PC0ާA}ӽ%A+/\0XO_w2nOjB{ E| ؎(`qj)8??%_\!kxdZUWU½pNp t6wKNky@1[Zg׶0 nB9f=yF"x쿽{~,PI$IđEZyJ4c[eׄ*TD#y$X@h+& 'y-&q$)V9\L5%tymLh V% ;40"9LT.hTn_Ei]K@:5"ho 5L Xqeh{q qt(Xv;O,Xh\-ю[')婤%JIQjP w)5vy:C87prV^rvphy4o:xba|y]> hChu%3Ŧή{Aۡ!h= 6tڄ0[;uv?^GXcۋ=l|MѺl7l4Y&hFXU*W 0`*9]g#t_9Ql>/{ %GO wltẩH;OxV4|w%H;ڹ/u;^<7Vɡsk>_z֌M8Bާ/8=  H)v~z-|n/'Mn&)z8o=<ƏW÷,!tarq߀!帟sG0KJJ$Q49\Ų'iJI ;Ќ5%)zDPv?_$Y6% .rL- yuDSv^"5)M= <kkLSNs-uDÿa gl+2rUwff\XBJk#L{`R0¬\ $u$( ďE0D"op&1"l@-Oe:B5mִWRag8kHLĀ)$U2U4{Lf ~@zub2L\\TќK!ll_zw%%j^a]~ӭ˜+*t@/(o@!T 1vĵºTx<[:[u%c]4_ӭΰ6lX9մΩm =4sXù'[ȚX"`_m>\1݅-a Wn-gL.fcG^JDhws w %nNf++䦝HtM.ºn = }o[2T]GOb ~BI][;KS~H7%U=wXxѼy*'Aֺ[3U{ܔ]-49{S"_$/#ABju9O1 VO z!)<L(GhGOԥ}lӝg'[+X4XǮO۸<>iޝGԧPEb倣xWHRL7C^ˎzXF1Jaqql>׺@'[ | Vc J G}gZ3MpFEÆfSN+BʤFxZR +ؔxX9ER>\q[%:+t_ITW8dt``OL |?̞ȅ%dD=``v)BnC 1I,T--̈,M7D)4N-Jx QZ*4ÈY?P1.BaY0o==LƷŃ,L}ٿj&wb`k|Ż8@FZSRa*i"?bI K5^k!g[.<%\bp꯮d:'|z%9˸urF,Rj0v:Q 3'Fe41D1mRg ')eYL3L`'E3:ej508!kQ' 5 Pk ⁊PA%G1D$UcÍZ`- k`a.RJ 岁j`[8[ dQ"%iBPrqk:)u1"sbwmhfЦq8|xK= PuwC2`ޝ~Z,E1#/}$/Z8jO(;Y(N"sff}tKj68,JO㨋6.5ZY: "Tj,;Jdqiz(MA8+ӿ(|>|Xr&c>U+6oP/QyL@_4l4el' a_q~uq/ZC&y쨑[1&ȦL 3 O=&p0D bU%; ϧ7o9;b/zb81`yH=g _f;{l^m8 ]<йFR#ح5;j)-[GW Ml`!@:*ПJSX(-9R8cxʙFBd'KF3b!3AgX8+R4?Y~ͼs\re0VI ۰] † "3o)|JRqnئC$xDAY"N`tv&TE(~H7Z I3 ,#%b OE^,!V#vzUTm;bu[.'`4Fn'͚y3$x/i.:C)Z|Wa6``-Z|DR :`di N(vHr(?X欔`͸F7ʺWp;*>2ŵ]ހՑ.k|<4j:TlYI:|-o-ؐiE7QJe28]M+Y;2bNdUZ`*)Ռ kCыU57X}Zj:qP{+#c|lMGЏJړ{I:jxj` GwZ&r梐JAqC5D'G=Es;ny0ں<;P4a>L9X*0@^0Mi: 'І7YWOPZe++Otd#SgflλI3ɔZn㕢ҾW]1z^,Jk{`ֱ6%y{(Q#c|l9`V3傏;jo& E}墱 ~ʑ{}gQȉIT<5mGfli>֋k`>ОCЂ|:s&5*]+14_0ZuWLB}p_Gw'pų ٧W'9~Ugkleq!mEX-[uDuTRz9W2ґwο|n Vz< peէ6 :ف8 D)b3gƌ@fXQ#W }JMakg(XJU}HQ(P6@*gB*'sz4Y K5Ϙ_G\feS$HqM`IJd9rLbB+UږSMbȑ3ilD( yD@")'ypl*YAD3 ;"Ak=> Agfi2&DS3FO}A|Ł[)p̑W3K4 f| W)?)^sG4WgCm3 8,P("41KM[c2{(J׬An# "w# ,;CEe&D9&V<5ZpK9BL?BA~V#3:XZ:뙗6 ^^aHE)PdГ >9@/tVRIP$ ݑH)hw*x@>NBF{B`R֍578*SfeI ERGf)\u4JfwMX d $,4TmuKKM2@u/rPHbt-]WKX0%ye^%0|QŻŒcYژzlsV,#]#|j KF.9#4=!I:PJ%ܓ!%ǂ.zafI5>H\0G/xM 27̢~!lx<& 6ڀFQ-xPRvʈ@$/s1Hva.lAyqGL)kr",, kaEԊ!X`R;p$ `J@Ɣ!"%W0NCF+a\j3eިiQ\ĝb)i!lYiboV٩--L*֋.w)Z!.@b/rI+{Yw%,(ZP ?بfii{6l$,,r6_E+!3D2$%Q% lѵR gvT1 @RQ9ղx?1zvA./. 
>>/WJCJ}ie1~Xa8k`'KYEnK)_4脜G\l疡S19Mrr꫓!֚Ѕ/1Mc$8 H&dBg=K4I p\fͯ$ۘ!=iZͺHʉ J=#S b2 >>JAi>^ȾnS=K}cH%a9L>Y$釷Hnn_|aaZcZmnI ܺgOZ(i*oI/Ő1rb$JXm|uO|?ƨ=I =JƩ-K\l<܄˫,/mc1`ۊVT{uh92֯ l=`ִ_EO `.P M ^#@ ҁDAD<R̳9Vphgs1oͬJdyPi@R ` 0kSf{8 ?D'ί<邞ߗׂ~ԈϨQ$j^܂{ S `pݜʛ-wfo~__~*L)1OdmJ[wfݔ7SZ 7wn񸱎au1dUq1?;"^ po}s%J8GnT<ckj&I(@4YWrׯ/~э*5S*MOMN{b67v&h;aMU\L,L̬>IzF%mO?e0mMӒ;:2]P{hש뭻@z7$Elߛ)a1qͣ~o9<}3OBx%"p9!4^G!bd2aL'Ax 1(JS0Z?wt3VDsmt:d }-y +)Vȁ+x\&2E  g%"k8< !8ZQcr"9ʳ,K!Vġq[*5kߪ qxDb(_BQc|S\w@MHх5 }V\\HFBkѓI sѻ8D6`Ո`._oWKRҰUI*ħ:ҧ|1Ng'SZK?/ _kQӢcKS' Mr]5OI%p ~8OoIl.>4/t,nHxn?:y~q~_ 5U^܍x[x[ڶ*wՂjx[/h ݂jnD/Bˍi >VlӺĮV(e[5K+&6o!d3nvvuE?;i6/+}wH}hkwhȿўS%c9e3nfY˿PUm1t_jw'Jm'pG B۱%4o}LmhO{C p_uؿcRœ \fقY4B }j"-CDvι'u ~6L?{OHn_%H=|x6Yds|`󰍑-$l-n]6[:bU.ֱԭPRo]ޘu&45wI\ۭBRyt}.6iuʣ~j܅c#Bͩ<-Su[#rدs_oϜh( qq&I;T{e2Lш=q'uS0E#Vջ筳:1)$sҽϻk]%䖶ck[?Ri [#rzκ/l͞ǔ u/J0<8yc)QNripCxv b([W2JOKJ4(U3 `,J')ƳS0L(m&5+4ӣȄZA(ũHQ;4p(*akr h؀"Y`>Y} NE>~JNÇUG8-ut4.\>XDUuǓBM.Wl62N*47mqߨQ,ЕdZP*- {#"#Az}κM F1:陳A0LC;/%x%P콽M)LWvpy?ߧRU]S'[ۢ|#iA}dͫĚW5k^Ys2p80)c,Z.$ Zy(b9P{J";wYAP3rn7 _wӛַ|;2P `:˯%QY(U>@,z_.% PzPZ/KcQd/,5C#;4Ћ,ܽmCjۛwqJ@A*(Ke|0RM "R碓BEhӫTJM 12cVd3G!RL>fZj]bjODEyʴ:I (AT c6%\RB? l4J t˫1b5}r[J&&^_:D5ZEGb0}¨D*׺Jf=% E m()R-$K%9)Q T_sлvQ%2t9)eLڴz^Mŏt-k`h 6~w{p1;&6AFId|ϴsP)\S|\0ktV5:JCQhGMNiCoj}lY]r4_dO,yp`zN3m΀jLOJ2 %0ݜ 3dp(;\jq~`,mVH `-вP:XaMqs}q*&ڈZZ$^˜ݑEFhF+:#wMEVV?nSΘ{ӋvqV5]ڜ6dgASǰ z?sZϬ2!vqFzO>yޡF0&ڎkJ^)7l1hBwPELmyQƕ,j1T@i!n/DjA,8̭>d53穙"#g'.mΰb{cnscIR{};AR{J4oeA.LgdхxB/"d/fN5j 9*B[Ry ]g6á8v]jfIuEo0dE:^fܒ_ lDA ǿXVrKζZe+PJ c?28ab޵z{ |/VϦ8xn11<]ZäDEHCimJoyʧ?+ !}̘K‹gϣ&1PWF@{huH9{CqmS5ߊtK͜BAܣt;5hj[r\|,چ8W[ҭ-rXM!MБ)ݺoEygJsHY4tơ7|/7t:zwyIA YQjMk?Q1mʴ~0Ibr迾od3z)\ݟ=&e/6ңę,ҮxNJRHW^j4ȕ+ρ[< 3)ihSFSu(K4mIJ/N"RےhH"TNCl@B/4@ۏu? V1\`[H(8}B(#S/^ 7jqШ;V}F41g U$ ˎ{L ?|#H6=\5j`/B^LC@ю^K_8Dm0Y.z2}ﻟ3 y;IEJwU282_[Dؑ1i!I8^Ͻhu1!(աw+2kBic]ƃ0ihHkZa+b爪eF 2XxEy !,b9MT\X.YEԊrxPH{N@wJuNiIH71%"L8 b=ަBiAW€XcoƖXIM rs½'D,=獩jߚ&lW[ϖwsjh{jz6RF]iM݂.]kݖF\twhl%oA59-KpvG5֜JJrR3F̎=&DH}c˴@K}Vn @̊lТO )!QقDz,lUA4R3i-Y]1j䋌ݒLZwBʏSgͅ6j:Dza^w܉F)MX'}V* ;bsX\+#Q{ޖFj2&ؘ} 6F#$%JAKJ&8pXv[9![#0"L$V:GC2DEN F(%婵hXĖ~GZ,i ٮ>EZ,a\.&B1zl~]Ƨ/J_8~ ]._R}ӃQ!Tx8xxmaC p^o7FoP*"HT^EVp@^Tx{-exʽ9ՌI&b|OՙFH *w=LhÕ Gi@!OX0rp+`5 d#dDzU1 եԬ s6rI(@)z &Z#4#ZzTYRV@'#x7e@Iǭ/J53%ZZYi.: %(iGcyNxHk- r1 #Zrm%UJp')PθvTs˄ԨRJHˢTq. _!!K.$Q'"Lmw^s?lr@#1{.y|dևsd.h3jTnj9$QpFvIīCkdb#B!> ƽ_By3~*q(>?"%zSThYP4gq͛0yZ(ږ5\!wF:/& {~7y'Krӵސ#ֵZ6,hP=͡F(0ܼMrVwXRDZ5秆7mɺ|nms~ıɤM  "Ж K,5QED =x]FCSB8_j.fcJ 8)+~V ZQvY1E>+;-&-*#Le"PFny% $"̗6jZ:b@頁01Ps rFPA[69VO݈t%|01l|ɃR o=RpƂT Dִ7JۇZ\) ]G .- k` ,WÛ^69dœ3Zw{ 2g}87? ra+(n2DAjRG]RpS2^'ݩ`ٻ6$W,vgʌ aa{aэDU%K&)$%/1uJ`CLVeėqdddcΜ>k#x S1|1(om'y޴Q)VڙӉ2dn->iv'dη<+M؈Cx_?[0O mc\h-jc 7l .Mh.fN7Myކ:ת^5e yr̼yI$1cհfEvY-o/nXċK*~4P x֗{et}tPRd~-$\Y"CZ s ZpF%goZ5P@9v}s 0t/fO]5׋jtFKln A}P{q,eA[HkFj0įԗC'X4t6!MO]W0IiR XnPh 0PbE|'7/q/W2kszwt92w@@g1 @ IA0 Pü=Q+ ؼILP> ) X\ЗЬ?щ c!14 5|]%2$cRwi:Iw'=TQ 99{*Dqsksyq=7npű}K} dͬnghC@*X8;!ӶǨS<ʬ1IjUF$U6ARM=\Ze cͭZe_U4`:?3} Ͷ"@!ZV:qLx Vĉ~ 塊3T+?s9OsMmKS?o|#{]62 {[eJe?j̑#JGkd6|:yssٖ$ܣKQ@SAPQj8LK. 5 5ae WZ6H,/Сz!weCރ}f9 k>4*1mYL U_޺#N賳hsz#wHX4?$cg]àOZkĕA{\Ivd-zђQbBM h %iGQ>2kG#,xV\d -V~)݇E)2=uLR5:G3I@=a2C:(Mݲzc'[Ip5ɰ :}ZVxlŕh].+h(A}Ǖ-4? x`Q#<..UH Fgt%1g3 Nіp:ۜzhށcBq%\11mIECi}J~=g/1l9ފ*6gkr:dkrX=pbK<ޞ8@uv_VHJ]:0-XَcR*v֖ݼ2Br|n ;ʹ-x B~指Cr+hakx|?!i

var/home/core/zuul-output/logs/kubelet.log0000644000000000000000001747305715134640465017713 0ustar rootroot
Jan 23 08:53:08 crc systemd[1]: Starting Kubernetes Kubelet...
Jan 23 08:53:08 crc kubenswrapper[5117]: Flag --container-runtime-endpoint has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 08:53:08 crc kubenswrapper[5117]: Flag --minimum-container-ttl-duration has been deprecated, Use --eviction-hard or --eviction-soft instead. Will be removed in a future version.
Jan 23 08:53:08 crc kubenswrapper[5117]: Flag --volume-plugin-dir has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 08:53:08 crc kubenswrapper[5117]: Flag --register-with-taints has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 08:53:08 crc kubenswrapper[5117]: Flag --pod-infra-container-image has been deprecated, will be removed in 1.35. Image garbage collector will get sandbox image information from CRI.
Jan 23 08:53:08 crc kubenswrapper[5117]: Flag --system-reserved has been deprecated, This parameter should be set via the config file specified by the Kubelet's --config flag. See https://kubernetes.io/docs/tasks/administer-cluster/kubelet-config-file/ for more information.
Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.519163 5117 server.go:212] "--pod-infra-container-image will not be pruned by the image garbage collector in kubelet and should also be set in the remote runtime"
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522679 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522703 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522709 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522714 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522718 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522723 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522729 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522736 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522741 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522749 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release.
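The flag deprecation warnings above all point at the file passed via --config, which this startup later records in the flag dump as /etc/kubernetes/kubelet.conf. As a minimal sketch (not this node's actual file), the same settings could be carried by KubeletConfiguration fields, assuming the kubelet.config.k8s.io/v1beta1 API; only containerRuntimeEndpoint below is taken from a value recorded in this log, while volumePluginDir, registerWithTaints, and systemReserved use illustrative placeholder values:

apiVersion: kubelet.config.k8s.io/v1beta1
kind: KubeletConfiguration
# replaces --container-runtime-endpoint (value as recorded in the FLAG dump further down)
containerRuntimeEndpoint: /var/run/crio/crio.sock
# replaces --volume-plugin-dir (placeholder path)
volumePluginDir: /etc/kubernetes/kubelet-plugins/volume/exec
# replaces --register-with-taints (placeholder taint)
registerWithTaints:
- key: node-role.kubernetes.io/master
  effect: NoSchedule
# replaces --system-reserved (placeholder reservations)
systemReserved:
  cpu: 500m
  memory: 1Gi

The unrecognized feature gate warnings that begin above continue below.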
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522756 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522761 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522765 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522769 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522774 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522778 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522783 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522788 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522793 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522797 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522801 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522805 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522810 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522814 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522818 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522823 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522827 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522831 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522835 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522852 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522859 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522864 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522869 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522874 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522880 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 
08:53:08.522885 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522889 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522894 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522899 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522903 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522908 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522915 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522923 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522929 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522934 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522938 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522943 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522947 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522951 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522955 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522959 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522963 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522967 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522971 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522974 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522982 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522986 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522990 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522994 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.522999 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523002 5117 
feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523006 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523011 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523016 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523020 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523024 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523028 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523031 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523035 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523041 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523045 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523049 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523055 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523060 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523064 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523068 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523071 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523076 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523080 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523084 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523088 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523092 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523095 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523099 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523103 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523106 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 
08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523690 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523700 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523705 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523710 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523715 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523719 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523723 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523727 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523732 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523736 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523740 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523745 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523749 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523753 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523758 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523762 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523766 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523770 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523776 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523780 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523784 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523787 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523791 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523795 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523800 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523804 5117 feature_gate.go:328] unrecognized feature gate: 
SignatureStores Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523808 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523812 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523815 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523819 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523823 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523827 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523831 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523835 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523841 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523845 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523849 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523853 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523859 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. 
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523865 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523870 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523874 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523878 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523883 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523888 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523892 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523897 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523902 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523906 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523913 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523918 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523925 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523929 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523933 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523938 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523942 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523947 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523953 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523958 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523962 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523968 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523972 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523977 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523981 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523985 5117 
feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523990 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.523994 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524000 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524004 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524008 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524013 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524020 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524024 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524029 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524033 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524037 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524042 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524046 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524050 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524054 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524058 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524063 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524067 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524072 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524078 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.524083 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524229 5117 flags.go:64] FLAG: --address="0.0.0.0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524258 5117 flags.go:64] FLAG: --allowed-unsafe-sysctls="[]" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524271 5117 flags.go:64] FLAG: --anonymous-auth="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524279 5117 flags.go:64] FLAG: --application-metrics-count-limit="100" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524287 5117 flags.go:64] FLAG: --authentication-token-webhook="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524292 5117 flags.go:64] FLAG: --authentication-token-webhook-cache-ttl="2m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524299 5117 flags.go:64] FLAG: --authorization-mode="AlwaysAllow" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524307 5117 flags.go:64] FLAG: --authorization-webhook-cache-authorized-ttl="5m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524312 5117 flags.go:64] FLAG: --authorization-webhook-cache-unauthorized-ttl="30s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524318 5117 flags.go:64] FLAG: --boot-id-file="/proc/sys/kernel/random/boot_id" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524323 5117 flags.go:64] FLAG: --bootstrap-kubeconfig="/etc/kubernetes/kubeconfig" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524328 5117 flags.go:64] FLAG: --cert-dir="/var/lib/kubelet/pki" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524333 5117 flags.go:64] FLAG: --cgroup-driver="cgroupfs" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524344 5117 flags.go:64] FLAG: --cgroup-root="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524349 5117 flags.go:64] FLAG: --cgroups-per-qos="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524354 5117 flags.go:64] FLAG: --client-ca-file="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524358 5117 flags.go:64] FLAG: --cloud-config="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524365 5117 flags.go:64] FLAG: --cloud-provider="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524370 5117 flags.go:64] FLAG: --cluster-dns="[]" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524377 5117 flags.go:64] FLAG: --cluster-domain="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524382 5117 flags.go:64] FLAG: --config="/etc/kubernetes/kubelet.conf" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524387 5117 flags.go:64] FLAG: --config-dir="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524391 5117 flags.go:64] FLAG: --container-hints="/etc/cadvisor/container_hints.json" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524396 5117 flags.go:64] FLAG: --container-log-max-files="5" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524403 5117 flags.go:64] FLAG: --container-log-max-size="10Mi" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524408 5117 flags.go:64] FLAG: --container-runtime-endpoint="/var/run/crio/crio.sock" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524413 5117 flags.go:64] FLAG: 
--containerd="/run/containerd/containerd.sock" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524418 5117 flags.go:64] FLAG: --containerd-namespace="k8s.io" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524423 5117 flags.go:64] FLAG: --contention-profiling="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524428 5117 flags.go:64] FLAG: --cpu-cfs-quota="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524433 5117 flags.go:64] FLAG: --cpu-cfs-quota-period="100ms" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524439 5117 flags.go:64] FLAG: --cpu-manager-policy="none" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524443 5117 flags.go:64] FLAG: --cpu-manager-policy-options="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524454 5117 flags.go:64] FLAG: --cpu-manager-reconcile-period="10s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524459 5117 flags.go:64] FLAG: --enable-controller-attach-detach="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524463 5117 flags.go:64] FLAG: --enable-debugging-handlers="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524468 5117 flags.go:64] FLAG: --enable-load-reader="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524472 5117 flags.go:64] FLAG: --enable-server="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524477 5117 flags.go:64] FLAG: --enforce-node-allocatable="[pods]" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524486 5117 flags.go:64] FLAG: --event-burst="100" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524490 5117 flags.go:64] FLAG: --event-qps="50" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524495 5117 flags.go:64] FLAG: --event-storage-age-limit="default=0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524500 5117 flags.go:64] FLAG: --event-storage-event-limit="default=0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524504 5117 flags.go:64] FLAG: --eviction-hard="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524510 5117 flags.go:64] FLAG: --eviction-max-pod-grace-period="0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524518 5117 flags.go:64] FLAG: --eviction-minimum-reclaim="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524523 5117 flags.go:64] FLAG: --eviction-pressure-transition-period="5m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524527 5117 flags.go:64] FLAG: --eviction-soft="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524532 5117 flags.go:64] FLAG: --eviction-soft-grace-period="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524540 5117 flags.go:64] FLAG: --exit-on-lock-contention="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524546 5117 flags.go:64] FLAG: --experimental-allocatable-ignore-eviction="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524552 5117 flags.go:64] FLAG: --experimental-mounter-path="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524558 5117 flags.go:64] FLAG: --fail-cgroupv1="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524563 5117 flags.go:64] FLAG: --fail-swap-on="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524568 5117 flags.go:64] FLAG: --feature-gates="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524575 5117 flags.go:64] FLAG: --file-check-frequency="20s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524580 5117 flags.go:64] FLAG: 
--global-housekeeping-interval="1m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524584 5117 flags.go:64] FLAG: --hairpin-mode="promiscuous-bridge" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524589 5117 flags.go:64] FLAG: --healthz-bind-address="127.0.0.1" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524593 5117 flags.go:64] FLAG: --healthz-port="10248" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524597 5117 flags.go:64] FLAG: --help="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524602 5117 flags.go:64] FLAG: --hostname-override="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524606 5117 flags.go:64] FLAG: --housekeeping-interval="10s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524611 5117 flags.go:64] FLAG: --http-check-frequency="20s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524617 5117 flags.go:64] FLAG: --image-credential-provider-bin-dir="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524622 5117 flags.go:64] FLAG: --image-credential-provider-config="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524626 5117 flags.go:64] FLAG: --image-gc-high-threshold="85" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524630 5117 flags.go:64] FLAG: --image-gc-low-threshold="80" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524634 5117 flags.go:64] FLAG: --image-service-endpoint="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524639 5117 flags.go:64] FLAG: --kernel-memcg-notification="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524644 5117 flags.go:64] FLAG: --kube-api-burst="100" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524648 5117 flags.go:64] FLAG: --kube-api-content-type="application/vnd.kubernetes.protobuf" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524654 5117 flags.go:64] FLAG: --kube-api-qps="50" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524659 5117 flags.go:64] FLAG: --kube-reserved="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524664 5117 flags.go:64] FLAG: --kube-reserved-cgroup="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524668 5117 flags.go:64] FLAG: --kubeconfig="/var/lib/kubelet/kubeconfig" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524673 5117 flags.go:64] FLAG: --kubelet-cgroups="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524685 5117 flags.go:64] FLAG: --local-storage-capacity-isolation="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524689 5117 flags.go:64] FLAG: --lock-file="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524694 5117 flags.go:64] FLAG: --log-cadvisor-usage="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524699 5117 flags.go:64] FLAG: --log-flush-frequency="5s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524706 5117 flags.go:64] FLAG: --log-json-info-buffer-size="0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524714 5117 flags.go:64] FLAG: --log-json-split-stream="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524719 5117 flags.go:64] FLAG: --log-text-info-buffer-size="0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524724 5117 flags.go:64] FLAG: --log-text-split-stream="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524729 5117 flags.go:64] FLAG: --logging-format="text" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524734 5117 flags.go:64] FLAG: 
--machine-id-file="/etc/machine-id,/var/lib/dbus/machine-id" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524739 5117 flags.go:64] FLAG: --make-iptables-util-chains="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524744 5117 flags.go:64] FLAG: --manifest-url="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524748 5117 flags.go:64] FLAG: --manifest-url-header="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524756 5117 flags.go:64] FLAG: --max-housekeeping-interval="15s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524761 5117 flags.go:64] FLAG: --max-open-files="1000000" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524768 5117 flags.go:64] FLAG: --max-pods="110" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524773 5117 flags.go:64] FLAG: --maximum-dead-containers="-1" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524778 5117 flags.go:64] FLAG: --maximum-dead-containers-per-container="1" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524782 5117 flags.go:64] FLAG: --memory-manager-policy="None" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524787 5117 flags.go:64] FLAG: --minimum-container-ttl-duration="6m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524797 5117 flags.go:64] FLAG: --minimum-image-ttl-duration="2m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524802 5117 flags.go:64] FLAG: --node-ip="192.168.126.11" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524807 5117 flags.go:64] FLAG: --node-labels="node-role.kubernetes.io/control-plane=,node-role.kubernetes.io/master=,node.openshift.io/os_id=rhel" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524896 5117 flags.go:64] FLAG: --node-status-max-images="50" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524903 5117 flags.go:64] FLAG: --node-status-update-frequency="10s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524908 5117 flags.go:64] FLAG: --oom-score-adj="-999" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524914 5117 flags.go:64] FLAG: --pod-cidr="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524919 5117 flags.go:64] FLAG: --pod-infra-container-image="quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:cc2b30e70040205c2536d01ae5c850be1ed2d775cf13249e50328e5085777977" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524928 5117 flags.go:64] FLAG: --pod-manifest-path="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524933 5117 flags.go:64] FLAG: --pod-max-pids="-1" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524937 5117 flags.go:64] FLAG: --pods-per-core="0" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524942 5117 flags.go:64] FLAG: --port="10250" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524951 5117 flags.go:64] FLAG: --protect-kernel-defaults="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524956 5117 flags.go:64] FLAG: --provider-id="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524960 5117 flags.go:64] FLAG: --qos-reserved="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524966 5117 flags.go:64] FLAG: --read-only-port="10255" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524971 5117 flags.go:64] FLAG: --register-node="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524979 5117 flags.go:64] FLAG: --register-schedulable="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524984 5117 flags.go:64] FLAG: 
--register-with-taints="node-role.kubernetes.io/master=:NoSchedule" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.524995 5117 flags.go:64] FLAG: --registry-burst="10" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525000 5117 flags.go:64] FLAG: --registry-qps="5" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525005 5117 flags.go:64] FLAG: --reserved-cpus="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525010 5117 flags.go:64] FLAG: --reserved-memory="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525016 5117 flags.go:64] FLAG: --resolv-conf="/etc/resolv.conf" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525021 5117 flags.go:64] FLAG: --root-dir="/var/lib/kubelet" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525027 5117 flags.go:64] FLAG: --rotate-certificates="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525033 5117 flags.go:64] FLAG: --rotate-server-certificates="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525037 5117 flags.go:64] FLAG: --runonce="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525042 5117 flags.go:64] FLAG: --runtime-cgroups="/system.slice/crio.service" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525048 5117 flags.go:64] FLAG: --runtime-request-timeout="2m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525053 5117 flags.go:64] FLAG: --seccomp-default="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525058 5117 flags.go:64] FLAG: --serialize-image-pulls="true" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525063 5117 flags.go:64] FLAG: --storage-driver-buffer-duration="1m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525069 5117 flags.go:64] FLAG: --storage-driver-db="cadvisor" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525075 5117 flags.go:64] FLAG: --storage-driver-host="localhost:8086" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525080 5117 flags.go:64] FLAG: --storage-driver-password="root" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525084 5117 flags.go:64] FLAG: --storage-driver-secure="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525089 5117 flags.go:64] FLAG: --storage-driver-table="stats" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525093 5117 flags.go:64] FLAG: --storage-driver-user="root" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525097 5117 flags.go:64] FLAG: --streaming-connection-idle-timeout="4h0m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525102 5117 flags.go:64] FLAG: --sync-frequency="1m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525107 5117 flags.go:64] FLAG: --system-cgroups="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525112 5117 flags.go:64] FLAG: --system-reserved="cpu=200m,ephemeral-storage=350Mi,memory=350Mi" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525120 5117 flags.go:64] FLAG: --system-reserved-cgroup="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525126 5117 flags.go:64] FLAG: --tls-cert-file="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525151 5117 flags.go:64] FLAG: --tls-cipher-suites="[]" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525159 5117 flags.go:64] FLAG: --tls-min-version="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525164 5117 flags.go:64] FLAG: --tls-private-key-file="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525168 5117 flags.go:64] FLAG: 
--topology-manager-policy="none" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525174 5117 flags.go:64] FLAG: --topology-manager-policy-options="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525180 5117 flags.go:64] FLAG: --topology-manager-scope="container" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525186 5117 flags.go:64] FLAG: --v="2" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525194 5117 flags.go:64] FLAG: --version="false" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525201 5117 flags.go:64] FLAG: --vmodule="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525207 5117 flags.go:64] FLAG: --volume-plugin-dir="/etc/kubernetes/kubelet-plugins/volume/exec" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525213 5117 flags.go:64] FLAG: --volume-stats-agg-period="1m0s" Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525345 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525353 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525358 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525362 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525366 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525370 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525374 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525379 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525383 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525387 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525392 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525397 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525401 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525405 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525409 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525414 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525419 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525424 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525428 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525432 5117 
feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525436 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525440 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525444 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525448 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525453 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525457 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525461 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525465 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525471 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525477 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525482 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525486 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525491 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525496 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525500 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525504 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525508 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525512 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525516 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525520 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525525 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525531 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525536 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525542 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525546 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525551 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525555 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525559 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525564 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525568 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525573 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525577 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525581 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525586 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525590 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525594 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525598 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525603 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525608 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525612 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525617 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525621 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525625 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525630 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525634 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525639 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 23 08:53:08 crc 
kubenswrapper[5117]: W0123 08:53:08.525644 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525648 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525652 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525656 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525661 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525665 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525670 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525674 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525679 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525683 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525690 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525694 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525699 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525704 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525708 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525713 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525717 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525722 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525727 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.525731 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.525939 5117 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.537304 5117 server.go:530] "Kubelet version" kubeletVersion="v1.33.5" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.537355 5117 server.go:532] "Golang 
settings" GOGC="" GOMAXPROCS="" GOTRACEBACK="" Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537441 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537456 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537462 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537467 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537472 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537476 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537481 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537485 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537490 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537494 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537499 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537503 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537507 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537513 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. 
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537518 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537523 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537528 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537532 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537537 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537543 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537547 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537552 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537556 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537560 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537564 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537569 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537573 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537577 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537581 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537585 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537589 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537595 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537600 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537604 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537608 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537612 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537616 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537620 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537624 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537629 5117 feature_gate.go:328] 
unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537633 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537638 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537642 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537648 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537652 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537656 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537661 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537665 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537669 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537674 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537678 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537683 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537687 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537691 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537696 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537700 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537704 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537709 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537713 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537718 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537723 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537727 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537731 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537735 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537742 5117 feature_gate.go:328] unrecognized feature gate: 
NetworkLiveMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537746 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537751 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537755 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537760 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537764 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537769 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537773 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537777 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537782 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537787 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537792 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537796 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537800 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537805 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537809 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537813 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537818 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537823 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537828 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537832 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.537836 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.537845 5117 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 
08:53:08.538041 5117 feature_gate.go:328] unrecognized feature gate: ImageModeStatusReporting Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538054 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstallIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538058 5117 feature_gate.go:328] unrecognized feature gate: AdditionalRoutingCapabilities Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538064 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIOperatorDisableMachineHealthCheckController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538071 5117 feature_gate.go:328] unrecognized feature gate: NewOLMCatalogdAPIV1Metas Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538076 5117 feature_gate.go:328] unrecognized feature gate: ConsolePluginContentSecurityPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538080 5117 feature_gate.go:328] unrecognized feature gate: NewOLMWebhookProviderOpenshiftServiceCA Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538085 5117 feature_gate.go:328] unrecognized feature gate: AzureClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538089 5117 feature_gate.go:328] unrecognized feature gate: NewOLMPreflightPermissionChecks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538097 5117 feature_gate.go:349] Setting deprecated feature gate KMSv1=true. It will be removed in a future release. Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538104 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538109 5117 feature_gate.go:328] unrecognized feature gate: OVNObservability Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538114 5117 feature_gate.go:328] unrecognized feature gate: RouteAdvertisements Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538118 5117 feature_gate.go:328] unrecognized feature gate: MixedCPUsAllocation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538122 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538126 5117 feature_gate.go:328] unrecognized feature gate: ExternalSnapshotMetadata Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538148 5117 feature_gate.go:328] unrecognized feature gate: ShortCertRotation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538153 5117 feature_gate.go:328] unrecognized feature gate: NewOLMOwnSingleNamespace Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538157 5117 feature_gate.go:328] unrecognized feature gate: VSphereMultiNetworks Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538163 5117 feature_gate.go:328] unrecognized feature gate: VSphereConfigurableMaxAllowedBlockVolumesPerNode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538168 5117 feature_gate.go:328] unrecognized feature gate: ClusterMonitoringConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538172 5117 feature_gate.go:328] unrecognized feature gate: NetworkDiagnosticsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538176 5117 feature_gate.go:328] unrecognized feature gate: HighlyAvailableArbiter Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538181 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538185 5117 feature_gate.go:328] unrecognized feature gate: BootcNodeManagement 
Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538189 5117 feature_gate.go:328] unrecognized feature gate: MachineConfigNodes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538194 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDC Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538198 5117 feature_gate.go:328] unrecognized feature gate: PinnedImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538202 5117 feature_gate.go:328] unrecognized feature gate: ExternalOIDCWithUIDAndExtraClaimMappings Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538207 5117 feature_gate.go:328] unrecognized feature gate: InsightsOnDemandDataGather Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538211 5117 feature_gate.go:328] unrecognized feature gate: EtcdBackendQuota Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538215 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538219 5117 feature_gate.go:328] unrecognized feature gate: KMSEncryptionProvider Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538223 5117 feature_gate.go:328] unrecognized feature gate: BootImageSkewEnforcement Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538228 5117 feature_gate.go:328] unrecognized feature gate: Example Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538232 5117 feature_gate.go:328] unrecognized feature gate: UpgradeStatus Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538236 5117 feature_gate.go:328] unrecognized feature gate: NutanixMultiSubnets Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538241 5117 feature_gate.go:328] unrecognized feature gate: AzureMultiDisk Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538245 5117 feature_gate.go:328] unrecognized feature gate: AutomatedEtcdBackup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538249 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538253 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerification Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538259 5117 feature_gate.go:328] unrecognized feature gate: NewOLM Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538263 5117 feature_gate.go:328] unrecognized feature gate: NetworkLiveMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538269 5117 feature_gate.go:328] unrecognized feature gate: DyanmicServiceEndpointIBMCloud Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538274 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpointsInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538278 5117 feature_gate.go:328] unrecognized feature gate: ImageStreamImportMode Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538282 5117 feature_gate.go:328] unrecognized feature gate: MultiArchInstallAzure Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538286 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPIController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538291 5117 feature_gate.go:328] unrecognized feature gate: AdminNetworkPolicy Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538295 5117 feature_gate.go:328] unrecognized feature gate: CPMSMachineNamePrefix Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538299 5117 feature_gate.go:328] unrecognized feature gate: VSphereMixedNodeEnv Jan 23 
08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538304 5117 feature_gate.go:328] unrecognized feature gate: DualReplica Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538309 5117 feature_gate.go:328] unrecognized feature gate: SetEIPForNLBIngressController Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538313 5117 feature_gate.go:328] unrecognized feature gate: AWSServiceLBNetworkSecurityGroup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538318 5117 feature_gate.go:328] unrecognized feature gate: AWSClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538322 5117 feature_gate.go:328] unrecognized feature gate: OpenShiftPodSecurityAdmission Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538326 5117 feature_gate.go:328] unrecognized feature gate: AzureWorkloadIdentity Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538332 5117 feature_gate.go:351] Setting GA feature gate ServiceAccountTokenNodeBinding=true. It will be removed in a future release. Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538337 5117 feature_gate.go:328] unrecognized feature gate: BuildCSIVolumes Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538342 5117 feature_gate.go:328] unrecognized feature gate: ClusterVersionOperatorConfiguration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538347 5117 feature_gate.go:328] unrecognized feature gate: IrreconcilableMachineConfig Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538351 5117 feature_gate.go:328] unrecognized feature gate: GCPClusterHostedDNSInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538356 5117 feature_gate.go:328] unrecognized feature gate: GCPCustomAPIEndpoints Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538360 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImages Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538364 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538369 5117 feature_gate.go:328] unrecognized feature gate: SignatureStores Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538374 5117 feature_gate.go:328] unrecognized feature gate: GatewayAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538378 5117 feature_gate.go:328] unrecognized feature gate: PreconfiguredUDNAddresses Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538383 5117 feature_gate.go:328] unrecognized feature gate: VSphereHostVMGroupZonal Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538388 5117 feature_gate.go:328] unrecognized feature gate: NetworkSegmentation Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538392 5117 feature_gate.go:328] unrecognized feature gate: ManagedBootImagesvSphere Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538396 5117 feature_gate.go:328] unrecognized feature gate: SigstoreImageVerificationPKI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538401 5117 feature_gate.go:328] unrecognized feature gate: VolumeGroupSnapshot Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538405 5117 feature_gate.go:328] unrecognized feature gate: AWSDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538410 5117 feature_gate.go:328] unrecognized feature gate: MultiDiskSetup Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538414 5117 feature_gate.go:328] unrecognized feature gate: AlibabaPlatform Jan 23 08:53:08 crc kubenswrapper[5117]: 
W0123 08:53:08.538420 5117 feature_gate.go:328] unrecognized feature gate: MachineAPIMigration Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538424 5117 feature_gate.go:328] unrecognized feature gate: Example2 Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538428 5117 feature_gate.go:328] unrecognized feature gate: InsightsConfigAPI Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538433 5117 feature_gate.go:328] unrecognized feature gate: DNSNameResolver Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538437 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerDynamicConfigurationManager Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538442 5117 feature_gate.go:328] unrecognized feature gate: ClusterAPIInstall Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538447 5117 feature_gate.go:328] unrecognized feature gate: MetricsCollectionProfiles Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538452 5117 feature_gate.go:328] unrecognized feature gate: IngressControllerLBSubnetsAWS Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538457 5117 feature_gate.go:328] unrecognized feature gate: AzureDedicatedHosts Jan 23 08:53:08 crc kubenswrapper[5117]: W0123 08:53:08.538461 5117 feature_gate.go:328] unrecognized feature gate: NoRegistryClusterOperations Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.538468 5117 feature_gate.go:384] feature gates: {map[DynamicResourceAllocation:false EventedPLEG:false ImageVolume:true KMSv1:true MaxUnavailableStatefulSet:false MinimumKubeletVersion:false MutatingAdmissionPolicy:false NodeSwap:false ProcMountType:true RouteExternalCertificate:true SELinuxMount:false ServiceAccountTokenNodeBinding:true StoragePerformantSecurityPolicy:true TranslateStreamCloseWebsocketRequests:false UserNamespacesPodSecurityStandards:true UserNamespacesSupport:true VolumeAttributesClass:false]} Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.538977 5117 server.go:962] "Client rotation is on, will bootstrap in background" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.542264 5117 bootstrap.go:266] "Unhandled Error" err="part of the existing bootstrap client certificate in /var/lib/kubelet/kubeconfig is expired: 2025-12-03 08:27:53 +0000 UTC" logger="UnhandledError" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.545614 5117 bootstrap.go:101] "Use the bootstrap credentials to request a cert, and set kubeconfig to point to the certificate dir" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.545773 5117 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-client-current.pem" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.546457 5117 server.go:1019] "Starting client certificate rotation" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.546604 5117 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.546664 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.561840 5117 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.564303 5117 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create 
certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.572999 5117 dynamic_cafile_content.go:161] "Starting controller" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.581628 5117 log.go:25] "Validated CRI v1 runtime API" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.611352 5117 log.go:25] "Validated CRI v1 image API" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.618794 5117 server.go:1452] "Using cgroup driver setting received from the CRI runtime" cgroupDriver="systemd" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.622248 5117 fs.go:135] Filesystem UUIDs: map[19e76f87-96b8-4794-9744-0b33dca22d5b:/dev/vda3 2026-01-23-08-47-11-00:/dev/sr0 5eb7c122-420e-4494-80ec-41664070d7b6:/dev/vda4 7B77-95E7:/dev/vda2] Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.622302 5117 fs.go:136] Filesystem partitions: map[/dev/shm:{mountpoint:/dev/shm major:0 minor:22 fsType:tmpfs blockSize:0} /dev/vda3:{mountpoint:/boot major:252 minor:3 fsType:ext4 blockSize:0} /dev/vda4:{mountpoint:/var major:252 minor:4 fsType:xfs blockSize:0} /run:{mountpoint:/run major:0 minor:24 fsType:tmpfs blockSize:0} /run/user/1000:{mountpoint:/run/user/1000 major:0 minor:45 fsType:tmpfs blockSize:0} /tmp:{mountpoint:/tmp major:0 minor:31 fsType:tmpfs blockSize:0} /var/lib/etcd:{mountpoint:/var/lib/etcd major:0 minor:46 fsType:tmpfs blockSize:0} composefs_0-33:{mountpoint:/ major:0 minor:33 fsType:overlay blockSize:0}] Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.638663 5117 manager.go:217] Machine: {Timestamp:2026-01-23 08:53:08.63743305 +0000 UTC m=+0.393558096 CPUVendorID:AuthenticAMD NumCores:12 NumPhysicalCores:1 NumSockets:12 CpuFrequency:2800000 MemoryCapacity:33649930240 SwapCapacity:0 MemoryByType:map[] NVMInfo:{MemoryModeCapacity:0 AppDirectModeCapacity:0 AvgPowerBudget:0} HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] MachineID:80bc4fba336e4ca1bc9d28a8be52a356 SystemUUID:ff88a58d-633e-4023-819b-155f3656b514 BootID:f5d437ef-57ca-47db-98fb-05e744e6bfaa Filesystems:[{Device:/run DeviceMajor:0 DeviceMinor:24 Capacity:6729986048 Type:vfs Inodes:819200 HasInodes:true} {Device:/dev/vda3 DeviceMajor:252 DeviceMinor:3 Capacity:366869504 Type:vfs Inodes:98304 HasInodes:true} {Device:/run/user/1000 DeviceMajor:0 DeviceMinor:45 Capacity:3364990976 Type:vfs Inodes:821531 HasInodes:true} {Device:/var/lib/etcd DeviceMajor:0 DeviceMinor:46 Capacity:1073741824 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/shm DeviceMajor:0 DeviceMinor:22 Capacity:16824963072 Type:vfs Inodes:4107657 HasInodes:true} {Device:/dev/vda4 DeviceMajor:252 DeviceMinor:4 Capacity:85292941312 Type:vfs Inodes:41679680 HasInodes:true} {Device:composefs_0-33 DeviceMajor:0 DeviceMinor:33 Capacity:6545408 Type:vfs Inodes:18446744073709551615 HasInodes:true} {Device:/tmp DeviceMajor:0 DeviceMinor:31 Capacity:16824967168 Type:vfs Inodes:1048576 HasInodes:true}] DiskMap:map[252:0:{Name:vda Major:252 Minor:0 Size:214748364800 Scheduler:none}] NetworkDevices:[{Name:br-ex MacAddress:fa:16:3e:0b:41:7e Speed:0 Mtu:1500} {Name:br-int MacAddress:b2:a9:9f:57:07:84 Speed:0 Mtu:1400} {Name:ens3 MacAddress:fa:16:3e:0b:41:7e Speed:-1 Mtu:1500} {Name:ens7 MacAddress:fa:16:3e:a9:00:33 Speed:-1 
Mtu:1500} {Name:ens7.20 MacAddress:52:54:00:55:0d:8c Speed:-1 Mtu:1496} {Name:ens7.21 MacAddress:52:54:00:56:cd:28 Speed:-1 Mtu:1496} {Name:ens7.22 MacAddress:52:54:00:e5:39:b4 Speed:-1 Mtu:1496} {Name:eth10 MacAddress:1e:b4:10:e6:af:f0 Speed:0 Mtu:1500} {Name:ovn-k8s-mp0 MacAddress:0a:58:0a:d9:00:02 Speed:0 Mtu:1400} {Name:ovs-system MacAddress:fe:bd:b7:7f:e0:6f Speed:0 Mtu:1500} {Name:tap0 MacAddress:5a:94:ef:e4:0c:ee Speed:10 Mtu:1500}] Topology:[{Id:0 Memory:33649930240 HugePages:[{PageSize:1048576 NumPages:0} {PageSize:2048 NumPages:0}] Cores:[{Id:0 Threads:[0] Caches:[{Id:0 Size:32768 Type:Data Level:1} {Id:0 Size:32768 Type:Instruction Level:1} {Id:0 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:0 Size:16777216 Type:Unified Level:3}] SocketID:0 BookID: DrawerID:} {Id:0 Threads:[1] Caches:[{Id:1 Size:32768 Type:Data Level:1} {Id:1 Size:32768 Type:Instruction Level:1} {Id:1 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:1 Size:16777216 Type:Unified Level:3}] SocketID:1 BookID: DrawerID:} {Id:0 Threads:[10] Caches:[{Id:10 Size:32768 Type:Data Level:1} {Id:10 Size:32768 Type:Instruction Level:1} {Id:10 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:10 Size:16777216 Type:Unified Level:3}] SocketID:10 BookID: DrawerID:} {Id:0 Threads:[11] Caches:[{Id:11 Size:32768 Type:Data Level:1} {Id:11 Size:32768 Type:Instruction Level:1} {Id:11 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:11 Size:16777216 Type:Unified Level:3}] SocketID:11 BookID: DrawerID:} {Id:0 Threads:[2] Caches:[{Id:2 Size:32768 Type:Data Level:1} {Id:2 Size:32768 Type:Instruction Level:1} {Id:2 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:2 Size:16777216 Type:Unified Level:3}] SocketID:2 BookID: DrawerID:} {Id:0 Threads:[3] Caches:[{Id:3 Size:32768 Type:Data Level:1} {Id:3 Size:32768 Type:Instruction Level:1} {Id:3 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:3 Size:16777216 Type:Unified Level:3}] SocketID:3 BookID: DrawerID:} {Id:0 Threads:[4] Caches:[{Id:4 Size:32768 Type:Data Level:1} {Id:4 Size:32768 Type:Instruction Level:1} {Id:4 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:4 Size:16777216 Type:Unified Level:3}] SocketID:4 BookID: DrawerID:} {Id:0 Threads:[5] Caches:[{Id:5 Size:32768 Type:Data Level:1} {Id:5 Size:32768 Type:Instruction Level:1} {Id:5 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:5 Size:16777216 Type:Unified Level:3}] SocketID:5 BookID: DrawerID:} {Id:0 Threads:[6] Caches:[{Id:6 Size:32768 Type:Data Level:1} {Id:6 Size:32768 Type:Instruction Level:1} {Id:6 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:6 Size:16777216 Type:Unified Level:3}] SocketID:6 BookID: DrawerID:} {Id:0 Threads:[7] Caches:[{Id:7 Size:32768 Type:Data Level:1} {Id:7 Size:32768 Type:Instruction Level:1} {Id:7 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:7 Size:16777216 Type:Unified Level:3}] SocketID:7 BookID: DrawerID:} {Id:0 Threads:[8] Caches:[{Id:8 Size:32768 Type:Data Level:1} {Id:8 Size:32768 Type:Instruction Level:1} {Id:8 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:8 Size:16777216 Type:Unified Level:3}] SocketID:8 BookID: DrawerID:} {Id:0 Threads:[9] Caches:[{Id:9 Size:32768 Type:Data Level:1} {Id:9 Size:32768 Type:Instruction Level:1} {Id:9 Size:524288 Type:Unified Level:2}] UncoreCaches:[{Id:9 Size:16777216 Type:Unified Level:3}] SocketID:9 BookID: DrawerID:}] Caches:[] Distances:[10]}] CloudProvider:Unknown InstanceType:Unknown InstanceID:None} Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.638880 5117 manager_no_libpfm.go:29] cAdvisor 
is build without cgo and/or libpfm support. Perf event counters are not available. Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.639015 5117 manager.go:233] Version: {KernelVersion:5.14.0-570.57.1.el9_6.x86_64 ContainerOsVersion:Red Hat Enterprise Linux CoreOS 9.6.20251021-0 (Plow) DockerVersion: DockerAPIVersion: CadvisorVersion: CadvisorRevision:} Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.640353 5117 container_manager_linux.go:270] "Container manager verified user specified cgroup-root exists" cgroupRoot=[] Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.640400 5117 container_manager_linux.go:275] "Creating Container Manager object based on Node Config" nodeConfig={"NodeName":"crc","RuntimeCgroupsName":"/system.slice/crio.service","SystemCgroupsName":"/system.slice","KubeletCgroupsName":"","KubeletOOMScoreAdj":-999,"ContainerRuntime":"","CgroupsPerQOS":true,"CgroupRoot":"/","CgroupDriver":"systemd","KubeletRootDir":"/var/lib/kubelet","ProtectKernelDefaults":true,"KubeReservedCgroupName":"","SystemReservedCgroupName":"","ReservedSystemCPUs":{},"EnforceNodeAllocatable":{"pods":{}},"KubeReserved":null,"SystemReserved":{"cpu":"200m","ephemeral-storage":"350Mi","memory":"350Mi"},"HardEvictionThresholds":[{"Signal":"memory.available","Operator":"LessThan","Value":{"Quantity":"100Mi","Percentage":0},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.1},"GracePeriod":0,"MinReclaim":null},{"Signal":"nodefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.available","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.15},"GracePeriod":0,"MinReclaim":null},{"Signal":"imagefs.inodesFree","Operator":"LessThan","Value":{"Quantity":null,"Percentage":0.05},"GracePeriod":0,"MinReclaim":null}],"QOSReserved":{},"CPUManagerPolicy":"none","CPUManagerPolicyOptions":null,"TopologyManagerScope":"container","CPUManagerReconcilePeriod":10000000000,"MemoryManagerPolicy":"None","MemoryManagerReservedMemory":null,"PodPidsLimit":4096,"EnforceCPULimits":true,"CPUCFSQuotaPeriod":100000000,"TopologyManagerPolicy":"none","TopologyManagerPolicyOptions":null,"CgroupVersion":2} Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.640661 5117 topology_manager.go:138] "Creating topology manager with none policy" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.640686 5117 container_manager_linux.go:306] "Creating device plugin manager" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.640709 5117 manager.go:141] "Creating Device Plugin manager" path="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.640982 5117 server.go:72] "Creating device plugin registration server" version="v1beta1" socket="/var/lib/kubelet/device-plugins/kubelet.sock" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.641427 5117 state_mem.go:36] "Initialized new in-memory state store" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.641630 5117 server.go:1267] "Using root directory" path="/var/lib/kubelet" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.642604 5117 kubelet.go:491] "Attempting to sync node with API server" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.642645 5117 kubelet.go:386] "Adding static pod path" path="/etc/kubernetes/manifests" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.642667 5117 file.go:69] "Watching path" 
path="/etc/kubernetes/manifests" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.642682 5117 kubelet.go:397] "Adding apiserver pod source" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.642706 5117 apiserver.go:42] "Waiting for node sync before watching apiserver pods" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.655461 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.656748 5117 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.656821 5117 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.660382 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.661245 5117 state_checkpoint.go:81] "State checkpoint: restored pod resource state from checkpoint" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.661272 5117 state_mem.go:40] "Initialized new in-memory state store for pod resource information tracking" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.665067 5117 kuberuntime_manager.go:279] "Container runtime initialized" containerRuntime="cri-o" version="1.33.5-3.rhaos4.20.gitd0ea985.el9" apiVersion="v1" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.665587 5117 certificate_store.go:147] "Loading cert/key pair from a file" filePath="/var/lib/kubelet/pki/kubelet-server-current.pem" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666077 5117 kubelet.go:953] "Not starting ClusterTrustBundle informer because we are in static kubelet mode or the ClusterTrustBundleProjection featuregate is disabled" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666558 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/portworx-volume" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666638 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/empty-dir" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666689 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/git-repo" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666743 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/host-path" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666797 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/nfs" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666842 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/secret" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666887 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/iscsi" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.666940 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/downward-api" Jan 23 08:53:08 crc 
kubenswrapper[5117]: I0123 08:53:08.666990 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/fc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.667051 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/configmap" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.667174 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/projected" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.667360 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/local-volume" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.667630 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/csi" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.667728 5117 plugins.go:616] "Loaded volume plugin" pluginName="kubernetes.io/image" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.668772 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.688581 5117 watchdog_linux.go:99] "Systemd watchdog is not enabled" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.688704 5117 server.go:1295] "Started kubelet" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.689433 5117 ratelimit.go:55] "Setting rate limiting for endpoint" service="podresources" qps=100 burstTokens=10 Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.689604 5117 server_v1.go:47] "podresources" method="list" useActivePods=true Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.689578 5117 server.go:180] "Starting to listen" address="0.0.0.0" port=10250 Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.690221 5117 server.go:255] "Starting to serve the podresources API" endpoint="unix:/var/lib/kubelet/pod-resources/kubelet.sock" Jan 23 08:53:08 crc systemd[1]: Started Kubernetes Kubelet. 
Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.691419 5117 server.go:317] "Adding debug handlers to kubelet server" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.691602 5117 certificate_manager.go:422] "Certificate rotation is enabled" logger="kubernetes.io/kubelet-serving" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.694055 5117 fs_resource_analyzer.go:67] "Starting FS ResourceAnalyzer" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.695029 5117 volume_manager.go:295] "The desired_state_of_world populator starts" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.695067 5117 volume_manager.go:297] "Starting Kubelet Volume Manager" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.694929 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.695556 5117 desired_state_of_world_populator.go:150] "Desired state populator starts to run" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.695882 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d502e5d305773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.688639859 +0000 UTC m=+0.444764885,LastTimestamp:2026-01-23 08:53:08.688639859 +0000 UTC m=+0.444764885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.697286 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.697007 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="200ms" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.697702 5117 factory.go:55] Registering systemd factory Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.697890 5117 factory.go:223] Registration of the systemd container factory successfully Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.698404 5117 factory.go:153] Registering CRI-O factory Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.698439 5117 factory.go:223] Registration of the crio container factory successfully Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.698535 5117 factory.go:221] Registration of the containerd container factory failed: unable to create containerd client: containerd: cannot unix dial containerd api service: dial unix /run/containerd/containerd.sock: connect: no such file or directory Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.698565 5117 factory.go:103] Registering Raw factory Jan 23 08:53:08 crc 
kubenswrapper[5117]: I0123 08:53:08.698581 5117 manager.go:1196] Started watching for new ooms in manager Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.705127 5117 manager.go:319] Starting recovery of all containers Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.741065 5117 manager.go:324] Recovery completed Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.743807 5117 watcher.go:152] Failed to watch directory "/sys/fs/cgroup/system.slice/ocp-mco-sshkey.service": inotify_add_watch /sys/fs/cgroup/system.slice/ocp-mco-sshkey.service: no such file or directory Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.748493 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.748653 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.748730 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.748797 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.748861 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.748925 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749000 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749065 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749192 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749272 5117 
reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749408 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" volumeName="kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749506 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749728 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.749799 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.750508 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.750589 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.750650 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.750709 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.750766 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751636 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751669 5117 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751690 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751717 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751729 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751740 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751751 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751777 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751817 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751834 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751845 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751882 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751894 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751968 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.751992 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752005 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752036 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752048 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752061 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752073 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752085 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752097 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752162 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752176 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752204 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752217 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752230 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752242 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752256 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752269 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752280 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752293 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752402 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752419 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="736c54fe-349c-4bb9-870a-d1c1d1c03831" volumeName="kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752431 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752445 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752485 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752507 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752523 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752537 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752565 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752576 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752587 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752601 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752623 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752638 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752650 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752662 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752691 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752727 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752762 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752776 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752788 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752806 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752822 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752832 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752860 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" 
volumeName="kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752874 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752883 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752894 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752907 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752918 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752933 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.752971 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753072 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753086 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af41de71-79cf-4590-bbe9-9e8b848862cb" volumeName="kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753096 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753107 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753119 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0effdbcf-dd7d-404d-9d48-77536d665a5d" volumeName="kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753212 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753229 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753241 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753271 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753285 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753298 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753339 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.753351 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754504 5117 reconstruct.go:144] "Volume is marked device as uncertain and added into the actual state" volumeName="kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" deviceMountPath="/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754529 5117 reconstruct.go:130] "Volume 
is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754540 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754564 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754575 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754585 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754596 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754607 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754617 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754628 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" volumeName="kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754668 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754701 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754716 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="ce090a97-9ab6-4c40-a719-64ff2acd9778" volumeName="kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754730 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754743 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754764 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754803 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754846 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754870 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754894 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754907 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.754994 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755033 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755047 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" 
volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755060 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755072 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755084 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755128 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755194 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="0b638b8f4bb0070e40528db779baf6a2" volumeName="kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755207 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755220 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755231 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755243 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755256 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="31fa8943-81cc-4750-a0b7-0fa9ab5af883" volumeName="kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755270 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" 
volumeName="kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755293 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755306 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755319 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20c5c5b4bed930554494851fe3cb2b2a" volumeName="kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755331 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6077b63e-53a2-4f96-9d56-1ce0324e4913" volumeName="kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755343 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755354 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755365 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755376 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755399 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755458 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e093be35-bb62-4843-b2e8-094545761610" volumeName="kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755471 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" 
volumeName="kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755482 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755491 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755524 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755554 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="149b3c48-e17c-4a66-a835-d86dabf6ff13" volumeName="kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755578 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755599 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="567683bd-0efc-4f21-b076-e28559628404" volumeName="kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755612 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="01080b46-74f1-4191-8755-5152a57b3b25" volumeName="kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755626 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="16bdd140-dce1-464c-ab47-dd5798d1d256" volumeName="kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755664 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755679 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="94a6e063-3d1a-4d44-875d-185291448c31" volumeName="kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755692 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" 
volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755703 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755721 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755818 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755840 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755854 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" volumeName="kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755869 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755882 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755931 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755945 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755958 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" volumeName="kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.755997 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" 
volumeName="kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756012 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756042 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756056 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f7e2c886-118e-43bb-bef1-c78134de392b" volumeName="kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756091 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="301e1965-1754-483d-b6cc-bfae7038bbca" volumeName="kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756125 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" volumeName="kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756161 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b4750666-1362-4001-abd0-6f89964cc621" volumeName="kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756181 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c5f2bfad-70f6-4185-a3d9-81ce12720767" volumeName="kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756221 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756238 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f559dfa3-3917-43a2-97f6-61ddfda10e93" volumeName="kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756250 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756288 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="18f80adb-c1c3-49ba-8ee4-932c851d3897" volumeName="kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" 
seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756302 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756316 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756350 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756372 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756397 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756410 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756422 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756457 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756470 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756482 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756493 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a208c9c2-333b-4b4a-be0d-bc32ec38a821" volumeName="kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" seLinuxMountContext="" Jan 23 
08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756504 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756528 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="09cfa50b-4138-4585-a53e-64dd3ab73335" volumeName="kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756544 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7df94c10-441d-4386-93a6-6730fb7bcde0" volumeName="kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756555 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756566 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc4541ce-7789-4670-bc75-5c2868e52ce0" volumeName="kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756577 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756601 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756612 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756624 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" volumeName="kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756666 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" volumeName="kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756699 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="af33e427-6803-48c2-a76a-dd9deb7cbf9a" volumeName="kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756711 
5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756723 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="593a3561-7760-45c5-8f91-5aaef7475d0f" volumeName="kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756735 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756746 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756779 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756814 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f0bc7fcb0822a2c13eb2d22cd8c0641" volumeName="kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756836 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a7a88189-c967-4640-879e-27665747f20c" volumeName="kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756848 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="34177974-8d82-49d2-a763-391d0df3bbd8" volumeName="kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756859 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" volumeName="kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756870 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7599e0b6-bddf-4def-b7f2-0b32206e8651" volumeName="kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756905 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756926 5117 reconstruct.go:130] "Volume is 
marked as uncertain and added into the actual state" pod="" podName="d7e8f42f-dc0e-424b-bb56-5ec849834888" volumeName="kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756968 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.756991 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="17b87002-b798-480a-8e17-83053d698239" volumeName="kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757011 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="428b39f5-eb1c-4f65-b7a4-eeb6e84860cc" volumeName="kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757023 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="584e1f4a-8205-47d7-8efb-3afc6017c4c9" volumeName="kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757055 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6ee8fbd3-1f81-4666-96da-5afc70819f1a" volumeName="kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757067 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757079 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9e9b5059-1b3e-4067-a63d-2952cbe863af" volumeName="kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757092 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757114 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d19cb085-0c5b-4810-b654-ce7923221d90" volumeName="kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757126 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757179 5117 reconstruct.go:130] "Volume is marked as uncertain and added into 
the actual state" pod="" podName="5ebfebf6-3ecd-458e-943f-bb25b52e2718" volumeName="kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757224 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757238 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f863fff9-286a-45fa-b8f0-8a86994b8440" volumeName="kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757249 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="2325ffef-9d5b-447f-b00e-3efc429acefe" volumeName="kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757260 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="81e39f7b-62e4-4fc9-992a-6535ce127a02" volumeName="kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757362 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" volumeName="kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757461 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="92dfbade-90b6-4169-8c07-72cff7f2c82b" volumeName="kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757475 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="cc85e424-18b2-4924-920b-bd291a8c4b01" volumeName="kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757500 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" volumeName="kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757542 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="6edfcf45-925b-4eff-b940-95b6fc0b85d4" volumeName="kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757563 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="9f71a554-e414-4bc3-96d2-674060397afe" volumeName="kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757574 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" volumeName="kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757586 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7afa918d-be67-40a6-803c-d3b0ae99d815" volumeName="kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757597 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" volumeName="kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757606 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" volumeName="kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757642 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" volumeName="kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757665 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="f65c0ac1-8bca-454d-a2e6-e35cb418beac" volumeName="kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757676 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="42a11a02-47e1-488f-b270-2679d3298b0e" volumeName="kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757687 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="869851b9-7ffb-4af0-b166-1d8aa40a5f80" volumeName="kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757699 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="3a14caf222afb62aaabdc47808b6f944" volumeName="kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757710 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="c491984c-7d4b-44aa-8c1e-d7974424fa47" volumeName="kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757733 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="a555ff2e-0be6-46d5-897d-863bb92ae2b3" volumeName="kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757757 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" 
podName="b605f283-6f2e-42da-a838-54421690f7d0" volumeName="kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757789 5117 reconstruct.go:130] "Volume is marked as uncertain and added into the actual state" pod="" podName="d565531a-ff86-4608-9d19-767de01ac31b" volumeName="kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" seLinuxMountContext="" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757814 5117 reconstruct.go:97] "Volume reconstruction finished" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.757823 5117 reconciler.go:26] "Reconciler: start to sync state" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.763642 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.767404 5117 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv4" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.768111 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.768202 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.768221 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.769294 5117 kubelet_network_linux.go:49] "Initialized iptables rules." protocol="IPv6" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.769385 5117 status_manager.go:230] "Starting to sync pod status with apiserver" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.769421 5117 watchdog_linux.go:127] "Systemd watchdog is not enabled or the interval is invalid, so health checking will not be started." 
Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.769437 5117 kubelet.go:2451] "Starting kubelet main sync loop" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.769482 5117 kubelet.go:2475] "Skipping pod synchronization" err="[container runtime status check may not have completed yet, PLEG is not healthy: pleg has yet to be successful]" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.770738 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.786816 5117 cpu_manager.go:222] "Starting CPU manager" policy="none" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.786846 5117 cpu_manager.go:223] "Reconciling" reconcilePeriod="10s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.786878 5117 state_mem.go:36] "Initialized new in-memory state store" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.790944 5117 policy_none.go:49] "None policy: Start" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.790971 5117 memory_manager.go:186] "Starting memorymanager" policy="None" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.790985 5117 state_mem.go:35] "Initializing new in-memory state store" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.795411 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.850595 5117 manager.go:341] "Starting Device Plugin manager" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.850783 5117 manager.go:517] "Failed to read data from checkpoint" err="checkpoint is not found" checkpoint="kubelet_internal_checkpoint" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.850799 5117 server.go:85] "Starting device plugin registration server" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.851277 5117 eviction_manager.go:189] "Eviction manager: starting control loop" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.851294 5117 container_log_manager.go:189] "Initializing container log rotate workers" workers=1 monitorPeriod="10s" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.851542 5117 plugin_watcher.go:51] "Plugin Watcher Start" path="/var/lib/kubelet/plugins_registry" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.851720 5117 plugin_manager.go:116] "The desired_state_of_world populator (plugin watcher) starts" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.851739 5117 plugin_manager.go:118] "Starting Kubelet Plugin Manager" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.863230 5117 eviction_manager.go:267] "eviction manager: failed to check if we have separate container filesystem. Ignoring." 
err="non-existent label \"crio-containers\"" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.863311 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.870093 5117 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-etcd/etcd-crc","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc"] Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.870455 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.871449 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.871503 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.871522 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.873250 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.873857 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.873904 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.874801 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.874856 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.874875 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.875469 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.875658 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.875721 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.877612 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.877676 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.877711 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.878821 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.878854 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.878868 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.879019 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.879078 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.879115 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.879805 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.879908 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.879968 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.880554 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.880587 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.880604 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.881235 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.881274 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.881286 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.881489 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.881712 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.881800 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882046 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882079 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882093 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882347 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882387 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882401 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882863 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.882901 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.883314 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.883346 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.883359 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.899724 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="400ms" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.937677 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.946931 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.952181 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.953301 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.953364 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.953379 5117 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientPID" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.953414 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.954177 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966500 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966547 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966601 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966633 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966661 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966767 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966823 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966857 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " 
pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966896 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966928 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.966959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967045 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967114 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967175 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967215 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967233 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967259 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967277 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967306 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967396 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967407 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967438 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967538 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967588 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967613 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/20c5c5b4bed930554494851fe3cb2b2a-tmp-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967653 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-run-kubernetes\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-var-run-kubernetes\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967576 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/0b638b8f4bb0070e40528db779baf6a2-tmp\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " 
pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967700 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-tmp-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967703 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: I0123 08:53:08.967858 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f0bc7fcb0822a2c13eb2d22cd8c0641-ca-trust-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.969056 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:08 crc kubenswrapper[5117]: E0123 08:53:08.996053 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:09 crc kubenswrapper[5117]: E0123 08:53:09.006778 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069313 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069389 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069514 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-cert-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069646 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " 
pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069744 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-resource-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069745 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069760 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069788 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069814 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-auto-backup-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-etcd-auto-backup-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069815 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069857 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kube\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-etc-kube\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069864 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069893 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069902 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"static-pod-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-static-pod-dir\") pod \"etcd-crc\" (UID: 
\"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069923 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"usr-local-bin\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-usr-local-bin\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069913 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069936 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069955 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"data-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-data-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069964 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-log-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.069992 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070052 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/4e08c320b1e9e2405e6e0107bdf7eeb4-var-lib-kubelet\") pod \"kube-rbac-proxy-crio-crc\" (UID: \"4e08c320b1e9e2405e6e0107bdf7eeb4\") " pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070054 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070081 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: 
\"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-resource-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070096 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070172 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070207 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/9f0bc7fcb0822a2c13eb2d22cd8c0641-cert-dir\") pod \"kube-controller-manager-crc\" (UID: \"9f0bc7fcb0822a2c13eb2d22cd8c0641\") " pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070231 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/20c5c5b4bed930554494851fe3cb2b2a-resource-dir\") pod \"etcd-crc\" (UID: \"20c5c5b4bed930554494851fe3cb2b2a\") " pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070283 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070278 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.070382 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/0b638b8f4bb0070e40528db779baf6a2-cert-dir\") pod \"openshift-kube-scheduler-crc\" (UID: \"0b638b8f4bb0070e40528db779baf6a2\") " pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.154372 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.155683 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.155732 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.155748 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.155808 
5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:09 crc kubenswrapper[5117]: E0123 08:53:09.156470 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.238873 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.248286 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd/etcd-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.271050 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.292663 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.299019 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: E0123 08:53:09.300404 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="800ms" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.307983 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:09 crc kubenswrapper[5117]: W0123 08:53:09.339987 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0b638b8f4bb0070e40528db779baf6a2.slice/crio-0a1bd1bc45b57e4b64ab70789b7de6407c190ba06f34e2cdb9c9463f2f4156ca WatchSource:0}: Error finding container 0a1bd1bc45b57e4b64ab70789b7de6407c190ba06f34e2cdb9c9463f2f4156ca: Status 404 returned error can't find the container with id 0a1bd1bc45b57e4b64ab70789b7de6407c190ba06f34e2cdb9c9463f2f4156ca Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.557405 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.568058 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.568110 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.568123 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.568175 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:09 crc kubenswrapper[5117]: E0123 08:53:09.568697 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.670060 5117 csi_plugin.go:988] Failed to contact API server when waiting 
for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 23 08:53:09 crc kubenswrapper[5117]: E0123 08:53:09.780785 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.794540 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f5b370037c3efc5a4cd8534353331e8f160cd1a9dcdb745c2c1f910d7811f8ff"} Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.803069 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"2aa1e1f6285bf94f127641b5b80d553d1010f7844a5af2d91130614319b971e7"} Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.807198 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"0a1bd1bc45b57e4b64ab70789b7de6407c190ba06f34e2cdb9c9463f2f4156ca"} Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.809433 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"e50661e7e049725c8df9d04d0ee0b816bf3ce62505ee480aa007588377c87210"} Jan 23 08:53:09 crc kubenswrapper[5117]: I0123 08:53:09.811880 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"6234355d18390e4567e5d74d340e90b3b8b6e448aa91f865b99809a738116247"} Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.037649 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.064438 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.101711 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="1.6s" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.112225 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get 
\"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.369019 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.371147 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.371205 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.371216 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.371259 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.371899 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.617894 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.619335 5117 certificate_manager.go:596] "Failed while requesting a signed certificate from the control plane" err="cannot create certificate signing request: Post \"https://api-int.crc.testing:6443/apis/certificates.k8s.io/v1/certificatesigningrequests\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="kubernetes.io/kube-apiserver-client-kubelet.UnhandledError" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.669820 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.821554 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b" exitCode=0 Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.821700 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b"} Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.822237 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.823625 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.823670 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.823684 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:10 crc 
kubenswrapper[5117]: E0123 08:53:10.823923 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.824489 5117 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="d8f4aea5fe77a64b87e80b61b975c69eee71967bc4201d8f0c39796b6f067385" exitCode=0 Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.824580 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"d8f4aea5fe77a64b87e80b61b975c69eee71967bc4201d8f0c39796b6f067385"} Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.824854 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.825545 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.825564 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.825586 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.825599 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.825780 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.827153 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.827181 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.827192 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.835330 5117 generic.go:358] "Generic (PLEG): container finished" podID="4e08c320b1e9e2405e6e0107bdf7eeb4" containerID="e7de6f11c0a4d297de34306d843f20b8f5ecfaa8a9834696edb6cc5a0a96b1d3" exitCode=0 Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.835430 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerDied","Data":"e7de6f11c0a4d297de34306d843f20b8f5ecfaa8a9834696edb6cc5a0a96b1d3"} Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.835472 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.842474 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.842549 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.842562 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.842847 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.845055 5117 generic.go:358] "Generic (PLEG): container finished" podID="0b638b8f4bb0070e40528db779baf6a2" containerID="2158a444f2e5ed189533f6a6fa74845441f50423e82571eb5347decc879e0adf" exitCode=0 Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.845140 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerDied","Data":"2158a444f2e5ed189533f6a6fa74845441f50423e82571eb5347decc879e0adf"} Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.845239 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.846242 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.846304 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.846318 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.846604 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.847443 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"a186274b94913495d46da21de467d9602aa5642fe66f59d1d0936ad0c257f299"} Jan 23 08:53:10 crc kubenswrapper[5117]: I0123 08:53:10.847486 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"250a0dfb6e83a36400a2c1fee2a02069a3ac81a98ee32800a2e3fc5895460f6a"} Jan 23 08:53:10 crc kubenswrapper[5117]: E0123 08:53:10.982478 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/default/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{crc.188d502e5d305773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.688639859 +0000 UTC m=+0.444764885,LastTimestamp:2026-01-23 08:53:08.688639859 +0000 UTC m=+0.444764885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.649338 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: Get \"https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csidrivers?limit=500&resourceVersion=0\": dial tcp 38.102.83.230:6443: connect: connection refused" logger="UnhandledError" 
reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.670651 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": dial tcp 38.102.83.230:6443: connect: connection refused Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.704149 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="3.2s" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.872101 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.872190 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.872209 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.873944 5117 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="b1c8813cbc8ffbf1a3babf5e3578eff17e7aaaf05c002a410486e4a03e6b1cdf" exitCode=0 Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.874024 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"b1c8813cbc8ffbf1a3babf5e3578eff17e7aaaf05c002a410486e4a03e6b1cdf"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.874286 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.874896 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.874929 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.874939 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.875185 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.877378 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" event={"ID":"4e08c320b1e9e2405e6e0107bdf7eeb4","Type":"ContainerStarted","Data":"05c6e942f8fb1cecb26e15deafc78dd49746f38cfe0758e6d7919bb9fc3e6dea"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.877550 5117 kubelet_node_status.go:413] "Setting node annotation to enable 
volume controller attach/detach" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.878429 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.878463 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.878477 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.878668 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.884420 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"f7f64e4f8c3cec7d99e275022f64107f71183b68fd53497362f1a58dd32dfa60"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.884468 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"0bfe30757bc73cbc8b856ff7d9d585995d1a3e47779114005149743244a6f380"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.884480 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" event={"ID":"0b638b8f4bb0070e40528db779baf6a2","Type":"ContainerStarted","Data":"d8794c97d1edfd02d317fd806304d3ee74adfa38c1c6c90062491b0c4a7f0467"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.884652 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.890279 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.890350 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.890365 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.890629 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.892047 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"7be69ecce38fd901dfc42bc01438a8261ad83a71307874878dee2cb2ef5b1b1a"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.892247 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.894060 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"44b22cd33c6dea6d7f7dba5fa840f09b94303c50382f756c820b3528d1f3e83a"} Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.894761 5117 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.894801 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.894811 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.895019 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.972759 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.974236 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.974300 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.974315 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:11 crc kubenswrapper[5117]: I0123 08:53:11.974347 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:11 crc kubenswrapper[5117]: E0123 08:53:11.975030 5117 kubelet_node_status.go:110] "Unable to register node with API server" err="Post \"https://api-int.crc.testing:6443/api/v1/nodes\": dial tcp 38.102.83.230:6443: connect: connection refused" node="crc" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.900409 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"0abdc0ee01b545440b3eafb0f19a57f853bbd0c4fa83ae4924ca02fd2a792faf"} Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.900478 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7"} Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.900615 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.901416 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.901449 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.901460 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:12 crc kubenswrapper[5117]: E0123 08:53:12.901711 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.903807 5117 generic.go:358] "Generic (PLEG): container finished" podID="20c5c5b4bed930554494851fe3cb2b2a" containerID="ae5584925e81fff2a00f157d8af42ac7e7bb2fd8737463d7ba3be0e5a116625c" exitCode=0 Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.904007 5117 
kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.904362 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.904506 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerDied","Data":"ae5584925e81fff2a00f157d8af42ac7e7bb2fd8737463d7ba3be0e5a116625c"} Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.904597 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.904842 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.905033 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.905541 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.905570 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.905581 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:12 crc kubenswrapper[5117]: E0123 08:53:12.905836 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906094 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906111 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906121 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:12 crc kubenswrapper[5117]: E0123 08:53:12.906403 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906440 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906478 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906490 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906488 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906541 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:12 crc kubenswrapper[5117]: I0123 08:53:12.906552 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasSufficientPID" Jan 23 08:53:12 crc kubenswrapper[5117]: E0123 08:53:12.906967 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:12 crc kubenswrapper[5117]: E0123 08:53:12.907091 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.644458 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.651197 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.861056 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.909520 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"e41d19d165367b61fd737269feaec71d83b06650aa554fb0f344cb47660de49d"} Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.909563 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"5fa8b4bcf2a4167a8ad401e18de0801667f0bb8d2fef588b2571b86bdfdf2b1e"} Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.909575 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"f6e52f86f24ee729f545c9bad677bb4ddd8adbaf01e1ef51ad8d1e58846b7ad4"} Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.909584 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"4456a3a289dc8737df18d0c9f29a8c7368f3a40bb66355f63905a4c95a7c1ecb"} Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.909713 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.909808 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910161 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910179 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910310 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910340 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910394 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910625 5117 kubelet_node_status.go:736] "Recording event message for 
node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910671 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.910686 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:13 crc kubenswrapper[5117]: E0123 08:53:13.910843 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.911033 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.911052 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:13 crc kubenswrapper[5117]: E0123 08:53:13.911059 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:13 crc kubenswrapper[5117]: I0123 08:53:13.911063 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:13 crc kubenswrapper[5117]: E0123 08:53:13.911352 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.919303 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd/etcd-crc" event={"ID":"20c5c5b4bed930554494851fe3cb2b2a","Type":"ContainerStarted","Data":"025dc7d6404c15d6e7227ab68e50229d35a3b4db2d72550da05d7a9bf9595453"} Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.919379 5117 prober_manager.go:312] "Failed to trigger a manual run" probe="Readiness" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.919405 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.919448 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.920227 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.920877 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.920940 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.920954 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.920976 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.921016 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.921060 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:14 crc kubenswrapper[5117]: E0123 08:53:14.921783 
5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:14 crc kubenswrapper[5117]: E0123 08:53:14.923147 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.925632 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.925714 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.925733 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:14 crc kubenswrapper[5117]: E0123 08:53:14.926342 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:14 crc kubenswrapper[5117]: I0123 08:53:14.979421 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kube-apiserver-client-kubelet" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.069470 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.175903 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.177563 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.177603 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.177614 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.177649 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.790974 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.920829 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.922417 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.922440 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.922498 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923240 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923276 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:15 crc kubenswrapper[5117]: 
I0123 08:53:15.923302 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923319 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923240 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923379 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923392 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923285 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:15 crc kubenswrapper[5117]: I0123 08:53:15.923457 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:15 crc kubenswrapper[5117]: E0123 08:53:15.923674 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:15 crc kubenswrapper[5117]: E0123 08:53:15.923839 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:15 crc kubenswrapper[5117]: E0123 08:53:15.924225 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:16 crc kubenswrapper[5117]: I0123 08:53:16.864937 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:16 crc kubenswrapper[5117]: I0123 08:53:16.925577 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:16 crc kubenswrapper[5117]: I0123 08:53:16.926914 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:16 crc kubenswrapper[5117]: I0123 08:53:16.926983 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:16 crc kubenswrapper[5117]: I0123 08:53:16.927003 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:16 crc kubenswrapper[5117]: E0123 08:53:16.927556 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.679109 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-etcd/etcd-crc" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.679552 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.681512 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.681564 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" 
event="NodeHasNoDiskPressure" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.681582 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:17 crc kubenswrapper[5117]: E0123 08:53:17.682124 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.929162 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.930037 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.930097 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:17 crc kubenswrapper[5117]: I0123 08:53:17.930113 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:17 crc kubenswrapper[5117]: E0123 08:53:17.930623 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:18 crc kubenswrapper[5117]: I0123 08:53:18.069828 5117 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 23 08:53:18 crc kubenswrapper[5117]: I0123 08:53:18.069995 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 23 08:53:18 crc kubenswrapper[5117]: E0123 08:53:18.863645 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:53:22 crc kubenswrapper[5117]: I0123 08:53:22.410944 5117 trace.go:236] Trace[812929465]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:53:12.409) (total time: 10001ms): Jan 23 08:53:22 crc kubenswrapper[5117]: Trace[812929465]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10001ms (08:53:22.410) Jan 23 08:53:22 crc kubenswrapper[5117]: Trace[812929465]: [10.001764212s] [10.001764212s] END Jan 23 08:53:22 crc kubenswrapper[5117]: E0123 08:53:22.411012 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: Get \"https://api-int.crc.testing:6443/api/v1/nodes?fieldSelector=metadata.name%3Dcrc&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 08:53:22 crc kubenswrapper[5117]: I0123 08:53:22.455079 5117 trace.go:236] Trace[1875652817]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:53:12.452) (total time: 10002ms): Jan 23 08:53:22 crc kubenswrapper[5117]: Trace[1875652817]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0": 
net/http: TLS handshake timeout 10002ms (08:53:22.454) Jan 23 08:53:22 crc kubenswrapper[5117]: Trace[1875652817]: [10.002153706s] [10.002153706s] END Jan 23 08:53:22 crc kubenswrapper[5117]: E0123 08:53:22.455156 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: Get \"https://api-int.crc.testing:6443/apis/node.k8s.io/v1/runtimeclasses?limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 08:53:22 crc kubenswrapper[5117]: I0123 08:53:22.671983 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: Get "https://api-int.crc.testing:6443/apis/storage.k8s.io/v1/csinodes/crc?resourceVersion=0": net/http: TLS handshake timeout Jan 23 08:53:22 crc kubenswrapper[5117]: I0123 08:53:22.792351 5117 trace.go:236] Trace[730569909]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:53:12.790) (total time: 10002ms): Jan 23 08:53:22 crc kubenswrapper[5117]: Trace[730569909]: ---"Objects listed" error:Get "https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0": net/http: TLS handshake timeout 10002ms (08:53:22.792) Jan 23 08:53:22 crc kubenswrapper[5117]: Trace[730569909]: [10.002263125s] [10.002263125s] END Jan 23 08:53:22 crc kubenswrapper[5117]: E0123 08:53:22.792394 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: Get \"https://api-int.crc.testing:6443/api/v1/services?fieldSelector=spec.clusterIP%21%3DNone&limit=500&resourceVersion=0\": net/http: TLS handshake timeout" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 08:53:23 crc kubenswrapper[5117]: I0123 08:53:23.166384 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 08:53:23 crc kubenswrapper[5117]: I0123 08:53:23.166469 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 08:53:23 crc kubenswrapper[5117]: I0123 08:53:23.171073 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 403" start-of-body={"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 08:53:23 crc kubenswrapper[5117]: I0123 08:53:23.171169 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 403" Jan 23 08:53:23 crc kubenswrapper[5117]: I0123 08:53:23.870416 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver namespace/openshift-kube-apiserver: Startup probe status=failure 
output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]log ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]etcd ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-filter ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-informers ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-controllers ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/crd-informer-synced ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-system-namespaces-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 23 08:53:23 crc kubenswrapper[5117]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 23 08:53:23 crc kubenswrapper[5117]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/bootstrap-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/start-kube-aggregator-informers ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/apiservice-registration-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/apiservice-discovery-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]autoregister-completion ok Jan 23 08:53:23 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapi-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: 
[+]poststarthook/apiservice-openapiv3-controller ok Jan 23 08:53:23 crc kubenswrapper[5117]: livez check failed Jan 23 08:53:23 crc kubenswrapper[5117]: I0123 08:53:23.870551 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.398104 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-etcd/etcd-crc" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.398546 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.400814 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.400900 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.400914 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:24 crc kubenswrapper[5117]: E0123 08:53:24.401466 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.423155 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-etcd/etcd-crc" Jan 23 08:53:24 crc kubenswrapper[5117]: E0123 08:53:24.905449 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": context deadline exceeded" interval="6.4s" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.947659 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.948511 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.948572 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.948593 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:24 crc kubenswrapper[5117]: E0123 08:53:24.949106 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:24 crc kubenswrapper[5117]: I0123 08:53:24.960846 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-etcd/etcd-crc" Jan 23 08:53:25 crc kubenswrapper[5117]: E0123 08:53:25.835246 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 08:53:25 crc kubenswrapper[5117]: I0123 08:53:25.949856 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller 
attach/detach" Jan 23 08:53:25 crc kubenswrapper[5117]: I0123 08:53:25.950447 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:25 crc kubenswrapper[5117]: I0123 08:53:25.950480 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:25 crc kubenswrapper[5117]: I0123 08:53:25.950491 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:25 crc kubenswrapper[5117]: E0123 08:53:25.950888 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:27 crc kubenswrapper[5117]: I0123 08:53:27.934772 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:27 crc kubenswrapper[5117]: I0123 08:53:27.935102 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:27 crc kubenswrapper[5117]: I0123 08:53:27.936211 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:27 crc kubenswrapper[5117]: I0123 08:53:27.936275 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:27 crc kubenswrapper[5117]: I0123 08:53:27.936293 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:27 crc kubenswrapper[5117]: E0123 08:53:27.936747 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.069976 5117 patch_prober.go:28] interesting pod/kube-controller-manager-crc container/cluster-policy-controller namespace/openshift-kube-controller-manager: Startup probe status=failure output="Get \"https://localhost:10357/healthz\": context deadline exceeded" start-of-body= Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.070080 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podUID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerName="cluster-policy-controller" probeResult="failure" output="Get \"https://localhost:10357/healthz\": context deadline exceeded" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.175063 5117 trace.go:236] Trace[1424138675]: "Reflector ListAndWatch" name:k8s.io/client-go/informers/factory.go:160 (23-Jan-2026 08:53:17.486) (total time: 10688ms): Jan 23 08:53:28 crc kubenswrapper[5117]: Trace[1424138675]: ---"Objects listed" error:csidrivers.storage.k8s.io is forbidden: User "system:anonymous" cannot list resource "csidrivers" in API group "storage.k8s.io" at the cluster scope 10688ms (08:53:28.174) Jan 23 08:53:28 crc kubenswrapper[5117]: Trace[1424138675]: [10.688279613s] [10.688279613s] END Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.174992 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e5d305773 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:Starting,Message:Starting kubelet.,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.688639859 +0000 UTC m=+0.444764885,LastTimestamp:2026-01-23 08:53:08.688639859 +0000 UTC m=+0.444764885,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.175188 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.175781 5117 reflector.go:430] "Caches populated" logger="kubernetes.io/kube-apiserver-client-kubelet" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.177455 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.178095 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.180377 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.181647 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.186544 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e675945ac default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeAllocatableEnforced,Message:Updated Node Allocatable limit across pods,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.859094444 +0000 UTC m=+0.615219480,LastTimestamp:2026-01-23 08:53:08.859094444 +0000 UTC m=+0.615219480,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.191776 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.871478477 +0000 UTC m=+0.627603503,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.196196 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.871512345 +0000 UTC m=+0.627637371,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.202432 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61eebb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" 
in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.871529145 +0000 UTC m=+0.627654171,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.208467 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.874842727 +0000 UTC m=+0.630967753,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.214001 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.874862286 +0000 UTC m=+0.630987312,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.218880 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61eebb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.874880995 +0000 UTC m=+0.631006021,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.223923 5117 event.go:359] "Server rejected event (will not retry!)" err="events 
\"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.87556295 +0000 UTC m=+0.631687976,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.228740 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.875679825 +0000 UTC m=+0.631804851,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.233712 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61eebb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.875735832 +0000 UTC m=+0.631860858,Count:4,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.238601 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.878843815 +0000 UTC m=+0.634968841,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc 
kubenswrapper[5117]: E0123 08:53:28.244876 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.878861234 +0000 UTC m=+0.634986260,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.249963 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61eebb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.878877153 +0000 UTC m=+0.635002179,Count:5,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.255006 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.879060884 +0000 UTC m=+0.635185910,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.260229 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.879105271 +0000 UTC m=+0.635230297,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.269085 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61eebb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.87912131 +0000 UTC m=+0.635246336,Count:6,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.276828 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.880567307 +0000 UTC m=+0.636692333,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.282000 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.880595586 +0000 UTC m=+0.636720612,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.284621 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61eebb5e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61eebb5e default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientPID,Message:Node crc status is now: NodeHasSufficientPID,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768226142 +0000 UTC 
m=+0.524351168,LastTimestamp:2026-01-23 08:53:08.880611045 +0000 UTC m=+0.636736071,Count:7,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.286840 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee1dfe\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee1dfe default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasSufficientMemory,Message:Node crc status is now: NodeHasSufficientMemory,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768185854 +0000 UTC m=+0.524310880,LastTimestamp:2026-01-23 08:53:08.881258152 +0000 UTC m=+0.637383178,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.289626 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"crc.188d502e61ee8558\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"default\"" event="&Event{ObjectMeta:{crc.188d502e61ee8558 default 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Node,Namespace:,Name:crc,UID:crc,APIVersion:,ResourceVersion:,FieldPath:,},Reason:NodeHasNoDiskPressure,Message:Node crc status is now: NodeHasNoDiskPressure,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:08.768212312 +0000 UTC m=+0.524337338,LastTimestamp:2026-01-23 08:53:08.881281031 +0000 UTC m=+0.637406057,Count:8,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.292568 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d502e813726f7 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.293065975 +0000 UTC m=+1.049191001,LastTimestamp:2026-01-23 08:53:09.293065975 +0000 UTC m=+1.049191001,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.295941 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" 
event="&Event{ObjectMeta:{etcd-crc.188d502e813eeaf3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.293574899 +0000 UTC m=+1.049699935,LastTimestamp:2026-01-23 08:53:09.293574899 +0000 UTC m=+1.049699935,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.301044 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502e819264b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.299045553 +0000 UTC m=+1.055170579,LastTimestamp:2026-01-23 08:53:09.299045553 +0000 UTC m=+1.055170579,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.306602 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502e83cc6a78 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.336402552 +0000 UTC m=+1.092527578,LastTimestamp:2026-01-23 08:53:09.336402552 +0000 UTC m=+1.092527578,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.311458 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502e84b7329e openshift-kube-scheduler 0 0001-01-01 
00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.351789214 +0000 UTC m=+1.107914250,LastTimestamp:2026-01-23 08:53:09.351789214 +0000 UTC m=+1.107914250,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.316101 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502eaa0e527b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Created,Message:Created container: wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.978255995 +0000 UTC m=+1.734381021,LastTimestamp:2026-01-23 08:53:09.978255995 +0000 UTC m=+1.734381021,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.320647 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d502eaa10a523 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.978408227 +0000 UTC m=+1.734533243,LastTimestamp:2026-01-23 08:53:09.978408227 +0000 UTC m=+1.734533243,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.326217 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502eaa1b5536 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: 
setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.979108662 +0000 UTC m=+1.735233688,LastTimestamp:2026-01-23 08:53:09.979108662 +0000 UTC m=+1.735233688,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.331582 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502eaa1f5f1b openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Created,Message:Created container: kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.979373339 +0000 UTC m=+1.735498355,LastTimestamp:2026-01-23 08:53:09.979373339 +0000 UTC m=+1.735498355,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.336736 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502eaa22461d openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Created,Message:Created container: setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.979563549 +0000 UTC m=+1.735688585,LastTimestamp:2026-01-23 08:53:09.979563549 +0000 UTC m=+1.735688585,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.343668 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502eaae52b73 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{wait-for-host-port},},Reason:Started,Message:Started container wait-for-host-port,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.992336243 +0000 UTC m=+1.748461279,LastTimestamp:2026-01-23 08:53:09.992336243 +0000 UTC m=+1.748461279,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.348978 5117 event.go:359] "Server rejected event (will not retry!)" err="events is 
forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502eaae58e6d openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager},},Reason:Started,Message:Started container kube-controller-manager,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.992361581 +0000 UTC m=+1.748486607,LastTimestamp:2026-01-23 08:53:09.992361581 +0000 UTC m=+1.748486607,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.354247 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d502eaaecf95b openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.992847707 +0000 UTC m=+1.748972733,LastTimestamp:2026-01-23 08:53:09.992847707 +0000 UTC m=+1.748972733,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.359487 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502eaaf83ed5 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.993586389 +0000 UTC m=+1.749711425,LastTimestamp:2026-01-23 08:53:09.993586389 +0000 UTC m=+1.749711425,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.365527 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502eab0b41a8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.994832296 +0000 UTC m=+1.750957322,LastTimestamp:2026-01-23 08:53:09.994832296 +0000 UTC m=+1.750957322,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.371542 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502eab1e1a56 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{setup},},Reason:Started,Message:Started container setup,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:09.996067414 +0000 UTC m=+1.752192440,LastTimestamp:2026-01-23 08:53:09.996067414 +0000 UTC m=+1.752192440,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.376913 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ec5ceb385 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Created,Message:Created container: cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.443848581 +0000 UTC m=+2.199973617,LastTimestamp:2026-01-23 08:53:10.443848581 +0000 UTC m=+2.199973617,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.383777 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ec6839c02 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Started,Message:Started container 
cluster-policy-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.455704578 +0000 UTC m=+2.211829604,LastTimestamp:2026-01-23 08:53:10.455704578 +0000 UTC m=+2.211829604,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.389760 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ec69fd49c openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.457554076 +0000 UTC m=+2.213679102,LastTimestamp:2026-01-23 08:53:10.457554076 +0000 UTC m=+2.213679102,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.395526 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502edc8bd134 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.825341236 +0000 UTC m=+2.581466262,LastTimestamp:2026-01-23 08:53:10.825341236 +0000 UTC m=+2.581466262,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.401081 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502edca80aaa openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on 
machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.827190954 +0000 UTC m=+2.583315980,LastTimestamp:2026-01-23 08:53:10.827190954 +0000 UTC m=+2.583315980,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.407088 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d502eddae242e openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.844367918 +0000 UTC m=+2.600492944,LastTimestamp:2026-01-23 08:53:10.844367918 +0000 UTC m=+2.600492944,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.415458 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502ede1a2393 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:10.851445651 +0000 UTC m=+2.607570677,LastTimestamp:2026-01-23 08:53:10.851445651 +0000 UTC m=+2.607570677,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.420595 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ee5e93bd8 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Created,Message:Created container: kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 
08:53:10.982458328 +0000 UTC m=+2.738583354,LastTimestamp:2026-01-23 08:53:10.982458328 +0000 UTC m=+2.738583354,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.426648 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ee7c61ff0 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-cert-syncer},},Reason:Started,Message:Started container kube-controller-manager-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.013711856 +0000 UTC m=+2.769836882,LastTimestamp:2026-01-23 08:53:11.013711856 +0000 UTC m=+2.769836882,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.452690 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ee7e0a875 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.015450741 +0000 UTC m=+2.771575787,LastTimestamp:2026-01-23 08:53:11.015450741 +0000 UTC m=+2.771575787,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.476803 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502eee27a543 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Created,Message:Created container: kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.120766275 +0000 UTC m=+2.876891301,LastTimestamp:2026-01-23 08:53:11.120766275 +0000 UTC m=+2.876891301,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.485165 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502eee386133 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Created,Message:Created container: kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.121862963 +0000 UTC m=+2.877987979,LastTimestamp:2026-01-23 08:53:11.121862963 +0000 UTC m=+2.877987979,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.497685 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502eeeea7bd1 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Created,Message:Created container: etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.133535185 +0000 UTC m=+2.889660211,LastTimestamp:2026-01-23 08:53:11.133535185 +0000 UTC m=+2.889660211,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.505399 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d502eef74e829 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Created,Message:Created container: kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.142606889 +0000 UTC m=+2.898731915,LastTimestamp:2026-01-23 08:53:11.142606889 +0000 UTC m=+2.898731915,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.511182 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502eef96d879 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] 
map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Started,Message:Started container kube-apiserver,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.144831097 +0000 UTC m=+2.900956123,LastTimestamp:2026-01-23 08:53:11.144831097 +0000 UTC m=+2.900956123,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.517547 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502eefb0d0b1 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.146533041 +0000 UTC m=+2.902658067,LastTimestamp:2026-01-23 08:53:11.146533041 +0000 UTC m=+2.902658067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.526048 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502ef0bba55f openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler},},Reason:Started,Message:Started container kube-scheduler,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.164020063 +0000 UTC m=+2.920145109,LastTimestamp:2026-01-23 08:53:11.164020063 +0000 UTC m=+2.920145109,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.526466 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.532890 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502ef0def875 openshift-kube-scheduler 0 
0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.166335093 +0000 UTC m=+2.922460129,LastTimestamp:2026-01-23 08:53:11.166335093 +0000 UTC m=+2.922460129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.539290 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-machine-config-operator\"" event="&Event{ObjectMeta:{kube-rbac-proxy-crio-crc.188d502ef1336516 openshift-machine-config-operator 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-machine-config-operator,Name:kube-rbac-proxy-crio-crc,UID:4e08c320b1e9e2405e6e0107bdf7eeb4,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-rbac-proxy-crio},},Reason:Started,Message:Started container kube-rbac-proxy-crio,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.171867926 +0000 UTC m=+2.927992952,LastTimestamp:2026-01-23 08:53:11.171867926 +0000 UTC m=+2.927992952,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.545732 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502ef133a58a openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-ensure-env-vars},},Reason:Started,Message:Started container etcd-ensure-env-vars,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.171884426 +0000 UTC m=+2.928009452,LastTimestamp:2026-01-23 08:53:11.171884426 +0000 UTC m=+2.928009452,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.551646 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ef5d5f5d5 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Created,Message:Created container: kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.249630677 +0000 UTC m=+3.005755703,LastTimestamp:2026-01-23 08:53:11.249630677 +0000 UTC m=+3.005755703,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.559429 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d502ef7d988f4 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-controller-manager-recovery-controller},},Reason:Started,Message:Started container kube-controller-manager-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.28341938 +0000 UTC m=+3.039544406,LastTimestamp:2026-01-23 08:53:11.28341938 +0000 UTC m=+3.039544406,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.564673 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502efe3dbf0b openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Created,Message:Created container: kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.390650123 +0000 UTC m=+3.146775149,LastTimestamp:2026-01-23 08:53:11.390650123 +0000 UTC m=+3.146775149,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.570772 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502efeace3d9 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Created,Message:Created container: 
kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.397934041 +0000 UTC m=+3.154059067,LastTimestamp:2026-01-23 08:53:11.397934041 +0000 UTC m=+3.154059067,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.583193 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502eff3c5f44 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-cert-syncer},},Reason:Started,Message:Started container kube-scheduler-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.407337284 +0000 UTC m=+3.163462310,LastTimestamp:2026-01-23 08:53:11.407337284 +0000 UTC m=+3.163462310,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.588817 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502eff4fc393 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.408608147 +0000 UTC m=+3.164733173,LastTimestamp:2026-01-23 08:53:11.408608147 +0000 UTC m=+3.164733173,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.596050 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f007e044d openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-syncer},},Reason:Started,Message:Started container kube-apiserver-cert-syncer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.428416589 +0000 UTC m=+3.184541615,LastTimestamp:2026-01-23 08:53:11.428416589 +0000 UTC m=+3.184541615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 
UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.603181 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f00d3347c openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.433999484 +0000 UTC m=+3.190124520,LastTimestamp:2026-01-23 08:53:11.433999484 +0000 UTC m=+3.190124520,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.604997 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502f0f851477 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Created,Message:Created container: kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.680537719 +0000 UTC m=+3.436662745,LastTimestamp:2026-01-23 08:53:11.680537719 +0000 UTC m=+3.436662745,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.609433 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f0fb2e9ee openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Created,Message:Created container: kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.683541486 +0000 UTC m=+3.439666512,LastTimestamp:2026-01-23 08:53:11.683541486 +0000 UTC m=+3.439666512,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.615562 5117 event.go:359] "Server rejected event (will not retry!)" 
err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-scheduler\"" event="&Event{ObjectMeta:{openshift-kube-scheduler-crc.188d502f115f25f6 openshift-kube-scheduler 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-scheduler,Name:openshift-kube-scheduler-crc,UID:0b638b8f4bb0070e40528db779baf6a2,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-scheduler-recovery-controller},},Reason:Started,Message:Started container kube-scheduler-recovery-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.711606262 +0000 UTC m=+3.467731288,LastTimestamp:2026-01-23 08:53:11.711606262 +0000 UTC m=+3.467731288,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.621751 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f11bf7b07 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-cert-regeneration-controller},},Reason:Started,Message:Started container kube-apiserver-cert-regeneration-controller,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.717919495 +0000 UTC m=+3.474044521,LastTimestamp:2026-01-23 08:53:11.717919495 +0000 UTC m=+3.474044521,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.627659 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f11e19f9b openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.720157083 +0000 UTC m=+3.476282129,LastTimestamp:2026-01-23 08:53:11.720157083 +0000 UTC m=+3.476282129,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.634102 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f1b582e46 openshift-etcd 0 0001-01-01 00:00:00 
+0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.878921798 +0000 UTC m=+3.635046824,LastTimestamp:2026-01-23 08:53:11.878921798 +0000 UTC m=+3.635046824,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.640959 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f210330a0 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Created,Message:Created container: kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:11.974015136 +0000 UTC m=+3.730140162,LastTimestamp:2026-01-23 08:53:11.974015136 +0000 UTC m=+3.730140162,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.650907 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f234a9faf openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-insecure-readyz},},Reason:Started,Message:Started container kube-apiserver-insecure-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.012251055 +0000 UTC m=+3.768376101,LastTimestamp:2026-01-23 08:53:12.012251055 +0000 UTC m=+3.768376101,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.657820 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f23cdcc33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image 
\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.020847667 +0000 UTC m=+3.776972693,LastTimestamp:2026-01-23 08:53:12.020847667 +0000 UTC m=+3.776972693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.668549 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f2dcf0b7f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Created,Message:Created container: etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.188701567 +0000 UTC m=+3.944826593,LastTimestamp:2026-01-23 08:53:12.188701567 +0000 UTC m=+3.944826593,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.674313 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.678359 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f304239e8 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.initContainers{etcd-resources-copy},},Reason:Started,Message:Started container etcd-resources-copy,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.22980452 +0000 UTC m=+3.985929546,LastTimestamp:2026-01-23 08:53:12.22980452 +0000 UTC m=+3.985929546,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.680046 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f3665a385 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 
08:53:12.332788613 +0000 UTC m=+4.088913639,LastTimestamp:2026-01-23 08:53:12.332788613 +0000 UTC m=+4.088913639,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.683271 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f3700b936 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.342952246 +0000 UTC m=+4.099077292,LastTimestamp:2026-01-23 08:53:12.342952246 +0000 UTC m=+4.099077292,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.686089 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f58b5bfc6 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.90846407 +0000 UTC m=+4.664589096,LastTimestamp:2026-01-23 08:53:12.90846407 +0000 UTC m=+4.664589096,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.692513 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f665fb4a5 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Created,Message:Created container: etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.137706149 +0000 UTC m=+4.893831195,LastTimestamp:2026-01-23 08:53:13.137706149 +0000 UTC m=+4.893831195,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.693083 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource 
\"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.697369 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f6710b112 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcdctl},},Reason:Started,Message:Started container etcdctl,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.149305106 +0000 UTC m=+4.905430132,LastTimestamp:2026-01-23 08:53:13.149305106 +0000 UTC m=+4.905430132,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.699878 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f6723a87e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.150548094 +0000 UTC m=+4.906673120,LastTimestamp:2026-01-23 08:53:13.150548094 +0000 UTC m=+4.906673120,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.704645 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f7475e2eb openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Created,Message:Created container: etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.374040811 +0000 UTC m=+5.130165837,LastTimestamp:2026-01-23 08:53:13.374040811 +0000 UTC m=+5.130165837,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.710553 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38732->192.168.126.11:17697: read: connection reset by peer" start-of-body= Jan 23 08:53:28 crc kubenswrapper[5117]: 
I0123 08:53:28.710728 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": read tcp 192.168.126.11:38732->192.168.126.11:17697: read: connection reset by peer" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.711202 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f754124cf openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd},},Reason:Started,Message:Started container etcd,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.387361487 +0000 UTC m=+5.143486513,LastTimestamp:2026-01-23 08:53:13.387361487 +0000 UTC m=+5.143486513,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.719343 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f7552f360 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.38852848 +0000 UTC m=+5.144653506,LastTimestamp:2026-01-23 08:53:13.38852848 +0000 UTC m=+5.144653506,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.724785 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f840b2c0f openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Created,Message:Created container: etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.635482639 +0000 UTC m=+5.391607655,LastTimestamp:2026-01-23 08:53:13.635482639 +0000 UTC m=+5.391607655,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.730727 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" 
cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f84c91cdd openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-metrics},},Reason:Started,Message:Started container etcd-metrics,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.647930589 +0000 UTC m=+5.404055615,LastTimestamp:2026-01-23 08:53:13.647930589 +0000 UTC m=+5.404055615,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.752441 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f84f1fe03 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.650609667 +0000 UTC m=+5.406734693,LastTimestamp:2026-01-23 08:53:13.650609667 +0000 UTC m=+5.406734693,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.757963 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f93320a4e openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Created,Message:Created container: etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.889688142 +0000 UTC m=+5.645813168,LastTimestamp:2026-01-23 08:53:13.889688142 +0000 UTC m=+5.645813168,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.763553 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f93f25755 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-readyz},},Reason:Started,Message:Started container etcd-readyz,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 
08:53:13.902290773 +0000 UTC m=+5.658415799,LastTimestamp:2026-01-23 08:53:13.902290773 +0000 UTC m=+5.658415799,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.770282 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502f940cb049 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Pulled,Message:Container image \"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:13.904017481 +0000 UTC m=+5.660142497,LastTimestamp:2026-01-23 08:53:13.904017481 +0000 UTC m=+5.660142497,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.777207 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502fa2f14634 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Created,Message:Created container: etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:14.153879092 +0000 UTC m=+5.910004118,LastTimestamp:2026-01-23 08:53:14.153879092 +0000 UTC m=+5.910004118,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.783926 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-etcd\"" event="&Event{ObjectMeta:{etcd-crc.188d502fa3b796b3 openshift-etcd 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-etcd,Name:etcd-crc,UID:20c5c5b4bed930554494851fe3cb2b2a,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{etcd-rev},},Reason:Started,Message:Started container etcd-rev,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:14.166875827 +0000 UTC m=+5.923000853,LastTimestamp:2026-01-23 08:53:14.166875827 +0000 UTC m=+5.923000853,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.792512 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 23 
08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-controller-manager-crc.188d50308c5bc4d1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 23 08:53:28 crc kubenswrapper[5117]: body: Jan 23 08:53:28 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:18.069949649 +0000 UTC m=+9.826074695,LastTimestamp:2026-01-23 08:53:18.069949649 +0000 UTC m=+9.826074695,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.798960 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d50308c5db730 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:18.070077232 +0000 UTC m=+9.826202258,LastTimestamp:2026-01-23 08:53:18.070077232 +0000 UTC m=+9.826202258,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.804712 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 23 08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188d5031bc21fd46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 23 08:53:28 crc kubenswrapper[5117]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 08:53:28 crc kubenswrapper[5117]: Jan 23 08:53:28 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:23.166436678 +0000 UTC m=+14.922561704,LastTimestamp:2026-01-23 08:53:23.166436678 +0000 UTC m=+14.922561704,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 
crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.811525 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d5031bc22dbc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:23.16649364 +0000 UTC m=+14.922618666,LastTimestamp:2026-01-23 08:53:23.16649364 +0000 UTC m=+14.922618666,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.817679 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d5031bc21fd46\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 23 08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188d5031bc21fd46 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 403 Jan 23 08:53:28 crc kubenswrapper[5117]: body: {"kind":"Status","apiVersion":"v1","metadata":{},"status":"Failure","message":"forbidden: User \"system:anonymous\" cannot get path \"/livez\"","reason":"Forbidden","details":{},"code":403} Jan 23 08:53:28 crc kubenswrapper[5117]: Jan 23 08:53:28 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:23.166436678 +0000 UTC m=+14.922561704,LastTimestamp:2026-01-23 08:53:23.171146997 +0000 UTC m=+14.927272023,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.823776 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d5031bc22dbc8\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d5031bc22dbc8 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 403,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:23.16649364 +0000 UTC m=+14.922618666,LastTimestamp:2026-01-23 08:53:23.171191128 +0000 UTC m=+14.927316154,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 
+0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.829472 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 23 08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188d5031e6195621 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:ProbeError,Message:Startup probe error: HTTP probe failed with statuscode: 500 Jan 23 08:53:28 crc kubenswrapper[5117]: body: [+]ping ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]log ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]etcd ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-api-request-count-filter ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-startkubeinformers ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-openshift-apiserver-reachable ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-oauth-apiserver-reachable ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-consumer ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-filter ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-informers ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-apiextensions-controllers ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/crd-informer-synced ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-system-namespaces-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-cluster-authentication-info-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-kube-apiserver-identity-lease-garbage-collector ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-legacy-token-tracking-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-service-ip-repair-controllers ok Jan 23 08:53:28 crc kubenswrapper[5117]: [-]poststarthook/rbac/bootstrap-roles failed: reason withheld Jan 23 08:53:28 crc kubenswrapper[5117]: [-]poststarthook/scheduling/bootstrap-system-priority-classes failed: reason withheld Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/priority-and-fairness-config-producer ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/bootstrap-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/start-kubernetes-service-cidr-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/aggregator-reload-proxy-client-cert ok Jan 23 08:53:28 crc kubenswrapper[5117]: 
[+]poststarthook/start-kube-aggregator-informers ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-local-available-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-status-remote-available-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-registration-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-wait-for-first-sync ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-discovery-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/kube-apiserver-autoregistration ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]autoregister-completion ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapi-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: [+]poststarthook/apiservice-openapiv3-controller ok Jan 23 08:53:28 crc kubenswrapper[5117]: livez check failed Jan 23 08:53:28 crc kubenswrapper[5117]: Jan 23 08:53:28 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:23.870512673 +0000 UTC m=+15.626637699,LastTimestamp:2026-01-23 08:53:23.870512673 +0000 UTC m=+15.626637699,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.834801 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d5031e61a60db openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver},},Reason:Unhealthy,Message:Startup probe failed: HTTP probe failed with statuscode: 500,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:23.870580955 +0000 UTC m=+15.626705991,LastTimestamp:2026-01-23 08:53:23.870580955 +0000 UTC m=+15.626705991,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.841868 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188d50308c5bc4d1\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event=< Jan 23 08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-controller-manager-crc.188d50308c5bc4d1 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:ProbeError,Message:Startup probe error: Get "https://localhost:10357/healthz": context deadline exceeded Jan 23 08:53:28 crc kubenswrapper[5117]: body: Jan 23 08:53:28 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:18.069949649 +0000 UTC 
m=+9.826074695,LastTimestamp:2026-01-23 08:53:28.070046541 +0000 UTC m=+19.826171567,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.848727 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-controller-manager-crc.188d50308c5db730\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-controller-manager\"" event="&Event{ObjectMeta:{kube-controller-manager-crc.188d50308c5db730 openshift-kube-controller-manager 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-controller-manager,Name:kube-controller-manager-crc,UID:9f0bc7fcb0822a2c13eb2d22cd8c0641,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{cluster-policy-controller},},Reason:Unhealthy,Message:Startup probe failed: Get \"https://localhost:10357/healthz\": context deadline exceeded,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:18.070077232 +0000 UTC m=+9.826202258,LastTimestamp:2026-01-23 08:53:28.070111263 +0000 UTC m=+19.826236289,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.854736 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 23 08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188d503306979c7f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": read tcp 192.168.126.11:38732->192.168.126.11:17697: read: connection reset by peer Jan 23 08:53:28 crc kubenswrapper[5117]: body: Jan 23 08:53:28 crc kubenswrapper[5117]: ,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:28.710626431 +0000 UTC m=+20.466751457,LastTimestamp:2026-01-23 08:53:28.710626431 +0000 UTC m=+20.466751457,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.860909 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50330699c0a3 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": read tcp 
192.168.126.11:38732->192.168.126.11:17697: read: connection reset by peer,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:28.710766755 +0000 UTC m=+20.466891781,LastTimestamp:2026-01-23 08:53:28.710766755 +0000 UTC m=+20.466891781,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.864502 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.867322 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.867618 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.868073 5117 patch_prober.go:28] interesting pod/kube-apiserver-crc container/kube-apiserver-check-endpoints namespace/openshift-kube-apiserver: Readiness probe status=failure output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" start-of-body= Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.868176 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" probeResult="failure" output="Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.868444 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.868557 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.868685 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.869256 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.872696 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.873580 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event=< Jan 23 08:53:28 crc kubenswrapper[5117]: &Event{ObjectMeta:{kube-apiserver-crc.188d50330ffb639f openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:ProbeError,Message:Readiness probe error: Get "https://192.168.126.11:17697/healthz": dial tcp 192.168.126.11:17697: connect: connection refused Jan 23 08:53:28 crc kubenswrapper[5117]: body: Jan 23 08:53:28 crc kubenswrapper[5117]: 
,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:28.868160415 +0000 UTC m=+20.624285441,LastTimestamp:2026-01-23 08:53:28.868160415 +0000 UTC m=+20.624285441,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,} Jan 23 08:53:28 crc kubenswrapper[5117]: > Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.878939 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50330ffbf810 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Unhealthy,Message:Readiness probe failed: Get \"https://192.168.126.11:17697/healthz\": dial tcp 192.168.126.11:17697: connect: connection refused,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:28.868198416 +0000 UTC m=+20.624323442,LastTimestamp:2026-01-23 08:53:28.868198416 +0000 UTC m=+20.624323442,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.960755 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.962554 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="0abdc0ee01b545440b3eafb0f19a57f853bbd0c4fa83ae4924ca02fd2a792faf" exitCode=255 Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.962616 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"0abdc0ee01b545440b3eafb0f19a57f853bbd0c4fa83ae4924ca02fd2a792faf"} Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.962803 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.963372 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.963399 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.963409 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.963706 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:28 crc kubenswrapper[5117]: I0123 08:53:28.963983 5117 scope.go:117] "RemoveContainer" containerID="0abdc0ee01b545440b3eafb0f19a57f853bbd0c4fa83ae4924ca02fd2a792faf" Jan 23 08:53:28 crc kubenswrapper[5117]: E0123 08:53:28.991798 5117 event.go:359] "Server rejected event (will not retry!)" err="events 
\"kube-apiserver-crc.188d502f23cdcc33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f23cdcc33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.020847667 +0000 UTC m=+3.776972693,LastTimestamp:2026-01-23 08:53:28.965913296 +0000 UTC m=+20.722038322,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:29 crc kubenswrapper[5117]: E0123 08:53:29.196977 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d502f3665a385\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f3665a385 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.332788613 +0000 UTC m=+4.088913639,LastTimestamp:2026-01-23 08:53:29.190721607 +0000 UTC m=+20.946846633,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:29 crc kubenswrapper[5117]: E0123 08:53:29.209500 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d502f3700b936\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f3700b936 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.342952246 +0000 UTC m=+4.099077292,LastTimestamp:2026-01-23 08:53:29.203459005 +0000 UTC m=+20.959584031,Count:2,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.677619 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster 
scope Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.967763 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.969905 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741"} Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.970173 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.970870 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.970914 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:29 crc kubenswrapper[5117]: I0123 08:53:29.970928 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:29 crc kubenswrapper[5117]: E0123 08:53:29.971356 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.674072 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.974197 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.975052 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/0.log" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.977244 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" exitCode=255 Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.977333 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741"} Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.977419 5117 scope.go:117] "RemoveContainer" containerID="0abdc0ee01b545440b3eafb0f19a57f853bbd0c4fa83ae4924ca02fd2a792faf" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.977473 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.978115 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.978194 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 
08:53:30.978211 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:30 crc kubenswrapper[5117]: E0123 08:53:30.978715 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:30 crc kubenswrapper[5117]: I0123 08:53:30.979102 5117 scope.go:117] "RemoveContainer" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" Jan 23 08:53:30 crc kubenswrapper[5117]: E0123 08:53:30.979370 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:53:30 crc kubenswrapper[5117]: E0123 08:53:30.984966 5117 event.go:359] "Server rejected event (will not retry!)" err="events is forbidden: User \"system:anonymous\" cannot create resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,Count:1,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:31 crc kubenswrapper[5117]: E0123 08:53:31.313805 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:53:31 crc kubenswrapper[5117]: I0123 08:53:31.675286 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:31 crc kubenswrapper[5117]: I0123 08:53:31.981745 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 23 08:53:31 crc kubenswrapper[5117]: I0123 08:53:31.984376 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:31 crc kubenswrapper[5117]: I0123 08:53:31.985463 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:31 crc kubenswrapper[5117]: I0123 08:53:31.985518 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:31 crc kubenswrapper[5117]: 
I0123 08:53:31.985534 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:31 crc kubenswrapper[5117]: E0123 08:53:31.986051 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:31 crc kubenswrapper[5117]: I0123 08:53:31.986395 5117 scope.go:117] "RemoveContainer" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" Jan 23 08:53:31 crc kubenswrapper[5117]: E0123 08:53:31.986673 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:53:31 crc kubenswrapper[5117]: E0123 08:53:31.991906 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d50338dd1587e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:53:31.986630738 +0000 UTC m=+23.742755764,Count:2,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:32 crc kubenswrapper[5117]: I0123 08:53:32.674185 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:33 crc kubenswrapper[5117]: I0123 08:53:33.674979 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:34 crc kubenswrapper[5117]: I0123 08:53:34.578329 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:34 crc kubenswrapper[5117]: I0123 08:53:34.579637 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:34 crc kubenswrapper[5117]: I0123 08:53:34.579678 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:34 crc kubenswrapper[5117]: I0123 08:53:34.579688 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:34 crc kubenswrapper[5117]: I0123 08:53:34.579711 5117 kubelet_node_status.go:78] 
"Attempting to register node" node="crc" Jan 23 08:53:34 crc kubenswrapper[5117]: E0123 08:53:34.588548 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 23 08:53:34 crc kubenswrapper[5117]: I0123 08:53:34.674803 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.075958 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.076251 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.077208 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.077247 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.077258 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:35 crc kubenswrapper[5117]: E0123 08:53:35.077643 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.080949 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.675199 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:35 crc kubenswrapper[5117]: E0123 08:53:35.698697 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 08:53:35 crc kubenswrapper[5117]: E0123 08:53:35.742228 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.994989 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.995822 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:35 crc kubenswrapper[5117]: I0123 08:53:35.995894 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:35 crc 
kubenswrapper[5117]: I0123 08:53:35.995913 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:35 crc kubenswrapper[5117]: E0123 08:53:35.996462 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:36 crc kubenswrapper[5117]: I0123 08:53:36.675436 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:37 crc kubenswrapper[5117]: E0123 08:53:37.439797 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 08:53:37 crc kubenswrapper[5117]: I0123 08:53:37.674573 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:38 crc kubenswrapper[5117]: E0123 08:53:38.320455 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:53:38 crc kubenswrapper[5117]: I0123 08:53:38.674657 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:38 crc kubenswrapper[5117]: E0123 08:53:38.864993 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:53:39 crc kubenswrapper[5117]: E0123 08:53:39.546021 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.675270 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.971081 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.971859 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.973493 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.973576 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 
08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.973589 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:39 crc kubenswrapper[5117]: E0123 08:53:39.974163 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:39 crc kubenswrapper[5117]: I0123 08:53:39.974541 5117 scope.go:117] "RemoveContainer" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" Jan 23 08:53:39 crc kubenswrapper[5117]: E0123 08:53:39.974818 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:53:39 crc kubenswrapper[5117]: E0123 08:53:39.980107 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d50338dd1587e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:53:39.974778017 +0000 UTC m=+31.730903033,Count:3,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.251692 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.251964 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.252992 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.253033 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.253046 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:40 crc kubenswrapper[5117]: E0123 08:53:40.253389 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.253712 5117 scope.go:117] "RemoveContainer" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" Jan 23 08:53:40 crc kubenswrapper[5117]: E0123 08:53:40.253963 5117 pod_workers.go:1301] 
"Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 10s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:53:40 crc kubenswrapper[5117]: E0123 08:53:40.259074 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d50338dd1587e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:53:40.253929274 +0000 UTC m=+32.010054300,Count:4,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:40 crc kubenswrapper[5117]: I0123 08:53:40.674346 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:41 crc kubenswrapper[5117]: I0123 08:53:41.588961 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:41 crc kubenswrapper[5117]: I0123 08:53:41.590283 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:41 crc kubenswrapper[5117]: I0123 08:53:41.590350 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:41 crc kubenswrapper[5117]: I0123 08:53:41.590368 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:41 crc kubenswrapper[5117]: I0123 08:53:41.590407 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:41 crc kubenswrapper[5117]: E0123 08:53:41.603712 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 23 08:53:41 crc kubenswrapper[5117]: I0123 08:53:41.674973 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:42 crc kubenswrapper[5117]: I0123 08:53:42.674837 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in 
API group "storage.k8s.io" at the cluster scope Jan 23 08:53:43 crc kubenswrapper[5117]: I0123 08:53:43.673957 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:44 crc kubenswrapper[5117]: I0123 08:53:44.675950 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:45 crc kubenswrapper[5117]: E0123 08:53:45.327244 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:53:45 crc kubenswrapper[5117]: I0123 08:53:45.676361 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:46 crc kubenswrapper[5117]: I0123 08:53:46.674526 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:47 crc kubenswrapper[5117]: I0123 08:53:47.674903 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:48 crc kubenswrapper[5117]: I0123 08:53:48.605896 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:48 crc kubenswrapper[5117]: I0123 08:53:48.607156 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:48 crc kubenswrapper[5117]: I0123 08:53:48.607189 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:48 crc kubenswrapper[5117]: I0123 08:53:48.607201 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:48 crc kubenswrapper[5117]: I0123 08:53:48.607225 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:48 crc kubenswrapper[5117]: E0123 08:53:48.616457 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 23 08:53:48 crc kubenswrapper[5117]: I0123 08:53:48.674290 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:48 crc kubenswrapper[5117]: E0123 08:53:48.865278 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:53:49 crc 
kubenswrapper[5117]: I0123 08:53:49.674858 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:50 crc kubenswrapper[5117]: I0123 08:53:50.675792 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:51 crc kubenswrapper[5117]: E0123 08:53:51.502507 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Service: services is forbidden: User \"system:anonymous\" cannot list resource \"services\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Service" Jan 23 08:53:51 crc kubenswrapper[5117]: I0123 08:53:51.674121 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:52 crc kubenswrapper[5117]: E0123 08:53:52.334663 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:53:52 crc kubenswrapper[5117]: I0123 08:53:52.675851 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:53 crc kubenswrapper[5117]: I0123 08:53:53.675872 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:54 crc kubenswrapper[5117]: I0123 08:53:54.674925 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:55 crc kubenswrapper[5117]: E0123 08:53:55.195696 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.Node: nodes \"crc\" is forbidden: User \"system:anonymous\" cannot list resource \"nodes\" in API group \"\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.Node" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.617217 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.618442 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.618525 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.618551 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:55 crc 
kubenswrapper[5117]: I0123 08:53:55.618592 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:53:55 crc kubenswrapper[5117]: E0123 08:53:55.632223 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.676308 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.770810 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.772212 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.772989 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.773119 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:55 crc kubenswrapper[5117]: E0123 08:53:55.773849 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:55 crc kubenswrapper[5117]: I0123 08:53:55.774417 5117 scope.go:117] "RemoveContainer" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" Jan 23 08:53:55 crc kubenswrapper[5117]: E0123 08:53:55.783627 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d502f23cdcc33\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f23cdcc33 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.020847667 +0000 UTC m=+3.776972693,LastTimestamp:2026-01-23 08:53:55.776305139 +0000 UTC m=+47.532430165,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:55 crc kubenswrapper[5117]: E0123 08:53:55.962081 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d502f3665a385\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f3665a385 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Created,Message:Created container: kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.332788613 +0000 UTC m=+4.088913639,LastTimestamp:2026-01-23 08:53:55.956224695 +0000 UTC m=+47.712349721,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:55 crc kubenswrapper[5117]: E0123 08:53:55.980017 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d502f3700b936\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d502f3700b936 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:Started,Message:Started container kube-apiserver-check-endpoints,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:12.342952246 +0000 UTC m=+4.099077292,LastTimestamp:2026-01-23 08:53:55.97216044 +0000 UTC m=+47.728285466,Count:3,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.054534 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.056452 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d"} Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.056648 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.057327 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.057453 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.057525 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:56 crc kubenswrapper[5117]: E0123 08:53:56.057895 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:56 crc kubenswrapper[5117]: I0123 08:53:56.675529 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:57 crc kubenswrapper[5117]: I0123 08:53:57.676948 5117 
csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.064222 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.064919 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/1.log" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.066654 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d" exitCode=255 Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.066737 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d"} Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.066804 5117 scope.go:117] "RemoveContainer" containerID="c7bb96a65da50ca22dff8d0454d059881d18cd7ddd63c0ccae376ed6f99d8741" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.067030 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.067617 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.067662 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.067676 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:53:58 crc kubenswrapper[5117]: E0123 08:53:58.068019 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.068368 5117 scope.go:117] "RemoveContainer" containerID="64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d" Jan 23 08:53:58 crc kubenswrapper[5117]: E0123 08:53:58.068665 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:53:58 crc kubenswrapper[5117]: E0123 08:53:58.073761 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d50338dd1587e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] 
[]},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:53:58.068624083 +0000 UTC m=+49.824749109,Count:5,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:53:58 crc kubenswrapper[5117]: E0123 08:53:58.446642 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.RuntimeClass: runtimeclasses.node.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"runtimeclasses\" in API group \"node.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.RuntimeClass" Jan 23 08:53:58 crc kubenswrapper[5117]: I0123 08:53:58.673462 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:53:58 crc kubenswrapper[5117]: E0123 08:53:58.865554 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:53:59 crc kubenswrapper[5117]: I0123 08:53:59.071788 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 23 08:53:59 crc kubenswrapper[5117]: E0123 08:53:59.345662 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:53:59 crc kubenswrapper[5117]: I0123 08:53:59.674834 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.251644 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.252004 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.253094 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.253172 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.253187 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:00 crc kubenswrapper[5117]: E0123 08:54:00.253606 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" 
err="node \"crc\" not found" node="crc" Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.254108 5117 scope.go:117] "RemoveContainer" containerID="64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d" Jan 23 08:54:00 crc kubenswrapper[5117]: E0123 08:54:00.254363 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:00 crc kubenswrapper[5117]: E0123 08:54:00.260272 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d50338dd1587e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:54:00.254331704 +0000 UTC m=+52.010456730,Count:6,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:54:00 crc kubenswrapper[5117]: I0123 08:54:00.674807 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:01 crc kubenswrapper[5117]: I0123 08:54:01.674645 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:02 crc kubenswrapper[5117]: I0123 08:54:02.633301 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:02 crc kubenswrapper[5117]: I0123 08:54:02.634223 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:02 crc kubenswrapper[5117]: I0123 08:54:02.634266 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:02 crc kubenswrapper[5117]: I0123 08:54:02.634285 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:02 crc kubenswrapper[5117]: I0123 08:54:02.634320 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:54:02 crc kubenswrapper[5117]: E0123 08:54:02.644592 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" 
in API group \"\" at the cluster scope" node="crc" Jan 23 08:54:02 crc kubenswrapper[5117]: I0123 08:54:02.675803 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:03 crc kubenswrapper[5117]: I0123 08:54:03.674075 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:03 crc kubenswrapper[5117]: I0123 08:54:03.916638 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:54:03 crc kubenswrapper[5117]: I0123 08:54:03.916868 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:03 crc kubenswrapper[5117]: I0123 08:54:03.917928 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:03 crc kubenswrapper[5117]: I0123 08:54:03.917989 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:03 crc kubenswrapper[5117]: I0123 08:54:03.918006 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:03 crc kubenswrapper[5117]: E0123 08:54:03.918656 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:04 crc kubenswrapper[5117]: E0123 08:54:04.081676 5117 reflector.go:200] "Failed to watch" err="failed to list *v1.CSIDriver: csidrivers.storage.k8s.io is forbidden: User \"system:anonymous\" cannot list resource \"csidrivers\" in API group \"storage.k8s.io\" at the cluster scope" logger="UnhandledError" reflector="k8s.io/client-go/informers/factory.go:160" type="*v1.CSIDriver" Jan 23 08:54:04 crc kubenswrapper[5117]: I0123 08:54:04.675157 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:05 crc kubenswrapper[5117]: I0123 08:54:05.674952 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.057488 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.057868 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.059075 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.059462 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.059536 5117 kubelet_node_status.go:736] "Recording event 
message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:06 crc kubenswrapper[5117]: E0123 08:54:06.059984 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.060331 5117 scope.go:117] "RemoveContainer" containerID="64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d" Jan 23 08:54:06 crc kubenswrapper[5117]: E0123 08:54:06.060618 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 20s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:06 crc kubenswrapper[5117]: E0123 08:54:06.066671 5117 event.go:359] "Server rejected event (will not retry!)" err="events \"kube-apiserver-crc.188d50338dd1587e\" is forbidden: User \"system:anonymous\" cannot patch resource \"events\" in API group \"\" in the namespace \"openshift-kube-apiserver\"" event="&Event{ObjectMeta:{kube-apiserver-crc.188d50338dd1587e openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-crc,UID:3a14caf222afb62aaabdc47808b6f944,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{kube-apiserver-check-endpoints},},Reason:BackOff,Message:Back-off restarting failed container kube-apiserver-check-endpoints in pod kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944),Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:53:30.97933427 +0000 UTC m=+22.735459296,LastTimestamp:2026-01-23 08:54:06.060578467 +0000 UTC m=+57.816703493,Count:7,Type:Warning,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:54:06 crc kubenswrapper[5117]: E0123 08:54:06.355232 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:54:06 crc kubenswrapper[5117]: I0123 08:54:06.673806 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:07 crc kubenswrapper[5117]: I0123 08:54:07.676939 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:08 crc kubenswrapper[5117]: I0123 08:54:08.676409 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:08 crc kubenswrapper[5117]: E0123 08:54:08.866465 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:54:09 crc 
kubenswrapper[5117]: I0123 08:54:09.645198 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:09 crc kubenswrapper[5117]: I0123 08:54:09.647177 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:09 crc kubenswrapper[5117]: I0123 08:54:09.647239 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:09 crc kubenswrapper[5117]: I0123 08:54:09.647257 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:09 crc kubenswrapper[5117]: I0123 08:54:09.647294 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:54:09 crc kubenswrapper[5117]: E0123 08:54:09.655821 5117 kubelet_node_status.go:116] "Unable to register node with API server, error getting existing node" err="nodes \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"nodes\" in API group \"\" at the cluster scope" node="crc" Jan 23 08:54:09 crc kubenswrapper[5117]: I0123 08:54:09.674045 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:10 crc kubenswrapper[5117]: I0123 08:54:10.676866 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:11 crc kubenswrapper[5117]: I0123 08:54:11.673779 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:12 crc kubenswrapper[5117]: I0123 08:54:12.673426 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:13 crc kubenswrapper[5117]: E0123 08:54:13.359836 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="leases.coordination.k8s.io \"crc\" is forbidden: User \"system:anonymous\" cannot get resource \"leases\" in API group \"coordination.k8s.io\" in the namespace \"kube-node-lease\"" interval="7s" Jan 23 08:54:13 crc kubenswrapper[5117]: I0123 08:54:13.674521 5117 csi_plugin.go:988] Failed to contact API server when waiting for CSINode publishing: csinodes.storage.k8s.io "crc" is forbidden: User "system:anonymous" cannot get resource "csinodes" in API group "storage.k8s.io" at the cluster scope Jan 23 08:54:14 crc kubenswrapper[5117]: I0123 08:54:14.379871 5117 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-l7t4l" Jan 23 08:54:14 crc kubenswrapper[5117]: I0123 08:54:14.385501 5117 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kube-apiserver-client-kubelet" csr="csr-l7t4l" Jan 23 08:54:14 crc kubenswrapper[5117]: I0123 08:54:14.449458 5117 reconstruct.go:205] "DevicePaths of reconstructed volumes updated" Jan 23 08:54:14 crc kubenswrapper[5117]: I0123 08:54:14.547068 5117 
transport.go:147] "Certificate rotation detected, shutting down client connections to start using new credentials" Jan 23 08:54:15 crc kubenswrapper[5117]: I0123 08:54:15.387400 5117 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kube-apiserver-client-kubelet" expiration="2026-02-22 08:49:14 +0000 UTC" deadline="2026-02-15 16:08:21.983538375 +0000 UTC" Jan 23 08:54:15 crc kubenswrapper[5117]: I0123 08:54:15.387456 5117 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kube-apiserver-client-kubelet" sleep="559h14m6.596087545s" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.656336 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.657614 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.657656 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.657671 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.657778 5117 kubelet_node_status.go:78] "Attempting to register node" node="crc" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.668509 5117 kubelet_node_status.go:127] "Node was previously registered" node="crc" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.668840 5117 kubelet_node_status.go:81] "Successfully registered node" node="crc" Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.668870 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.672121 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.672201 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.672214 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.672234 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.672247 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:16Z","lastTransitionTime":"2026-01-23T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.686528 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.696447 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.696511 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.696524 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.696544 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.696556 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:16Z","lastTransitionTime":"2026-01-23T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.709838 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.719347 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.719389 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.719400 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.719415 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.719426 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:16Z","lastTransitionTime":"2026-01-23T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.732912 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.741846 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.741885 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.741896 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.741911 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:16 crc kubenswrapper[5117]: I0123 08:54:16.741920 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:16Z","lastTransitionTime":"2026-01-23T08:54:16Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.780850 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:16Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"},\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"}]}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.781015 5117 
kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.781046 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.882075 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:16 crc kubenswrapper[5117]: E0123 08:54:16.982400 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.082864 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.183755 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.284424 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.385361 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.485827 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.586194 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.686670 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: I0123 08:54:17.770455 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:17 crc kubenswrapper[5117]: I0123 08:54:17.776704 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:17 crc kubenswrapper[5117]: I0123 08:54:17.776773 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:17 crc kubenswrapper[5117]: I0123 08:54:17.776848 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.777835 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:17 crc kubenswrapper[5117]: I0123 08:54:17.778384 5117 scope.go:117] "RemoveContainer" containerID="64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.787199 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.887716 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:17 crc kubenswrapper[5117]: E0123 08:54:17.988320 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.089168 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 
23 08:54:18 crc kubenswrapper[5117]: I0123 08:54:18.123406 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 23 08:54:18 crc kubenswrapper[5117]: I0123 08:54:18.125767 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d"} Jan 23 08:54:18 crc kubenswrapper[5117]: I0123 08:54:18.125995 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:18 crc kubenswrapper[5117]: I0123 08:54:18.127093 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:18 crc kubenswrapper[5117]: I0123 08:54:18.127122 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:18 crc kubenswrapper[5117]: I0123 08:54:18.127155 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.127606 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.189625 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.289816 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.390654 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.491632 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.592232 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.693189 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.793555 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.867494 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.893953 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:18 crc kubenswrapper[5117]: E0123 08:54:18.994429 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.094817 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.129614 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.130276 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/2.log" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132062 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" exitCode=255 Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132152 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerDied","Data":"f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d"} Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132205 5117 scope.go:117] "RemoveContainer" containerID="64afdc8e4288bf265e3dafff7fca7ba20f5ed0905dde7c87cd8d303cac9c975d" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132403 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132942 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132977 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.132986 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.133591 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.133831 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.134039 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.195114 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: I0123 08:54:19.197338 5117 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.295241 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.396357 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.497478 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: 
E0123 08:54:19.597972 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.698698 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.799148 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:19 crc kubenswrapper[5117]: E0123 08:54:19.899895 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.000302 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.100714 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.136718 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.201844 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.251518 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.251753 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.253032 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.253092 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.253107 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.253844 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:20 crc kubenswrapper[5117]: I0123 08:54:20.254125 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.254402 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.302734 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.403199 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.503707 5117 kubelet_node_status.go:515] 
"Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.604442 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.704853 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.805236 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:20 crc kubenswrapper[5117]: E0123 08:54:20.905732 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.006786 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.107866 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.208772 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.309703 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.410525 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.510913 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.611228 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.711914 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.812556 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:21 crc kubenswrapper[5117]: E0123 08:54:21.912850 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.013298 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.113890 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.214540 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.314876 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.415598 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.515951 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.617006 5117 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.717454 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.818160 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:22 crc kubenswrapper[5117]: E0123 08:54:22.918505 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.019614 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.120411 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.221219 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.322189 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.422338 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.523465 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.624192 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.725356 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.826023 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:23 crc kubenswrapper[5117]: E0123 08:54:23.926168 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.027256 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.128103 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.229190 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.330291 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.430821 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.531247 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.632185 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 
08:54:24.732494 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.833424 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:24 crc kubenswrapper[5117]: E0123 08:54:24.934091 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.034211 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.135208 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.236200 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.337056 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.437489 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.537905 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.638761 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.739526 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.840087 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:25 crc kubenswrapper[5117]: E0123 08:54:25.940504 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.041173 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.141785 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.242874 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.343278 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.443746 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.544282 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.644883 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.745722 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc 
kubenswrapper[5117]: E0123 08:54:26.845935 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:26 crc kubenswrapper[5117]: E0123 08:54:26.946749 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.047508 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.084319 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.089242 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.089329 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.089356 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.089389 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.089417 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:27Z","lastTransitionTime":"2026-01-23T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.105716 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.109770 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.109845 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.109871 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.109902 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.109928 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:27Z","lastTransitionTime":"2026-01-23T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.124496 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.128429 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.128523 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.128554 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.128588 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.128612 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:27Z","lastTransitionTime":"2026-01-23T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.141031 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.144392 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.144459 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.144471 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.144488 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:27 crc kubenswrapper[5117]: I0123 08:54:27.144500 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:27Z","lastTransitionTime":"2026-01-23T08:54:27Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.154872 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:27Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.155045 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.155075 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.255952 5117 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.356447 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.457189 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.557688 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.658333 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.758753 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.859391 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:27 crc kubenswrapper[5117]: E0123 08:54:27.960521 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.061064 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: I0123 08:54:28.126463 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:54:28 crc kubenswrapper[5117]: I0123 08:54:28.126684 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:28 crc kubenswrapper[5117]: I0123 08:54:28.127603 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:28 crc kubenswrapper[5117]: I0123 08:54:28.127647 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:28 crc kubenswrapper[5117]: I0123 08:54:28.127660 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.128082 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:28 crc kubenswrapper[5117]: I0123 08:54:28.128400 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.128638 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.161436 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.262064 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc 
kubenswrapper[5117]: E0123 08:54:28.362202 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.462602 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.563460 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.663811 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.764910 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.865578 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.868029 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:54:28 crc kubenswrapper[5117]: E0123 08:54:28.966704 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.066907 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.167221 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.267561 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.367841 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.468110 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.569264 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.669440 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.770412 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.870946 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:29 crc kubenswrapper[5117]: E0123 08:54:29.971367 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.072230 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.172932 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.273828 5117 kubelet_node_status.go:515] "Error getting the current node from lister" 
err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.374193 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.474533 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.574759 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.675762 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.776313 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.877515 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:30 crc kubenswrapper[5117]: E0123 08:54:30.978193 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.078621 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.178783 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.279184 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.379663 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.480805 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.581713 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.682620 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.783534 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.883995 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:31 crc kubenswrapper[5117]: E0123 08:54:31.984516 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.085235 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.186169 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.287211 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.387896 5117 kubelet_node_status.go:515] "Error getting the current 
node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.488888 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.590064 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.690246 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: I0123 08:54:32.770104 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:32 crc kubenswrapper[5117]: I0123 08:54:32.771333 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:32 crc kubenswrapper[5117]: I0123 08:54:32.771413 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:32 crc kubenswrapper[5117]: I0123 08:54:32.771439 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.772345 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.791352 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.891787 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:32 crc kubenswrapper[5117]: E0123 08:54:32.992438 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.093096 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.193902 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.294505 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.395226 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.495404 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.595584 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.696204 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: I0123 08:54:33.770400 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:33 crc kubenswrapper[5117]: I0123 08:54:33.771526 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:33 crc kubenswrapper[5117]: I0123 08:54:33.771602 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:33 crc kubenswrapper[5117]: I0123 08:54:33.771624 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.772170 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.797194 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.898018 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:33 crc kubenswrapper[5117]: E0123 08:54:33.998328 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.099063 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.199982 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.301046 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.402023 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.502971 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.603381 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.704160 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.804683 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:34 crc kubenswrapper[5117]: E0123 08:54:34.904994 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.005501 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.106326 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.207421 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.308031 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.408207 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.509333 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 
08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.609782 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.710332 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.811043 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:35 crc kubenswrapper[5117]: E0123 08:54:35.911448 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.012491 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.113243 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.214329 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.314809 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.415637 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.515977 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.616604 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.716753 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.817638 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:36 crc kubenswrapper[5117]: E0123 08:54:36.918109 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.018469 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.121330 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.222183 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.323336 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.424469 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.520847 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.527563 5117 kubelet_node_status.go:736] "Recording event message for node" 
node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.527643 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.527657 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.527680 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.527698 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:37Z","lastTransitionTime":"2026-01-23T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.543376 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.548771 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.548833 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.548849 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.548872 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.548887 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:37Z","lastTransitionTime":"2026-01-23T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.564099 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.569422 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.569483 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.569501 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.569557 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.569577 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:37Z","lastTransitionTime":"2026-01-23T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.582402 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.586656 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.586701 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.586712 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.586730 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:37 crc kubenswrapper[5117]: I0123 08:54:37.586742 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:37Z","lastTransitionTime":"2026-01-23T08:54:37Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.598729 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"allocatable\\\":{\\\"cpu\\\":\\\"11800m\\\",\\\"ephemeral-storage\\\":\\\"76396645454\\\",\\\"memory\\\":\\\"32400460Ki\\\"},\\\"capacity\\\":{\\\"cpu\\\":\\\"12\\\",\\\"ephemeral-storage\\\":\\\"83293888Ki\\\",\\\"memory\\\":\\\"32861260Ki\\\"},\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient memory available\\\",\\\"reason\\\":\\\"KubeletHasSufficientMemory\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has no disk pressure\\\",\\\"reason\\\":\\\"KubeletHasNoDiskPressure\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"kubelet has sufficient PID available\\\",\\\"reason\\\":\\\"KubeletHasSufficientPID\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:37Z\\\",\\\"message\\\":\\\"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\\\",\\\"reason\\\":\\\"KubeletNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"images\\\":[{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c8a088031661d94022418e93fb63744c38e1c4cff93ea3b95c096a290c2b7a3\\\"],\\\"sizeBytes\\\":2981840865},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\"],\\\"sizeBytes\\\":1641503854},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:286bb0beab328954b0a86b7f066fd5a843b462d6acb2812df7ec788015cd32d4\\\",\\\"registry.redhat.io/redhat/redhat-operator-index@sha256:be02784ed82978c399102be1c6c9f2ca441be4d984e0fd7100c155dd4417ebbf\\\",\\\"registry.redhat.io/redhat/redhat-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1597684406},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\"],\\\"sizeBytes\\\":1261384762},{\\\"names\\\":[\\\"registry.redhat.io/redhat/community-operator-index@sha256:0d50962980a5aeecae2d99c98913fb0f46940164e41de0af2ba0e3dafe0d9017\\\",\\\"registry.redhat.io/redhat/community-operator-index@sha256:8d607fb6cc75ca36bca1e0a9c5bea5d1919b75db20733df69c64c8a10ee8083d\\\",\\\"registry.redhat.io/redhat/community-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1224304325},{\\\"names\\\":[\\\"registry.redhat.io/redhat/certified-operator-index@sha256:541db5b20a3d2199602b3b5ac80f09ea31498034e9ae3841238b03a39150f0d7\\\",\\\"registry.redhat.io/redhat/certified-operator-index@sha256:a4c5df55584cba56f00004a090923a5c6de2071add5eb1672a5e20aa646aad8c\\\",\\\"registry.redhat.io/redhat/certified-operator-index:v4.20\\\"],\\\"sizeBytes\\\":1126957757},{\\\"names\\\":[\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:b1c859067d6b7b785ab4977ed7137c5b3bb257234f7d7737a1d2836cef1576b5\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index@sha256:df08951924aa23b2333436a1d04b2dba56c366bb4f09d39ae3aedb980e4fb909\\\",\\\"registry.redhat.io/redhat/redhat-marketplace-index:v4.20\\\"],\\\"sizeBytes\\\":1079537324},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\"],\\\"sizeBytes\\\":1052707833},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8d1a1e4abe0326c3af89e9eaa4b7449dd2d5b6f9403c677e19b00b24947b1df9\\\"],\\\"sizeBytes\\\":989392005},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b2b1fc3d5bb4944cbd5b23b87566d7ba24b1b66f5a0465f76bcc05023191cc47\\\"],\\\"sizeBytes\\\":971668163},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\"],\\\"sizeBytes\\\":969078739},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7e3d6c8802ae53d6aecf38aa7b560d7892193806bdeb3d7c1637fac77c47fd1f\\\"],\\\"sizeBytes\\\":876488654},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\"],\\\"sizeBytes\\\":847332502},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:36c4867005702f0c4cbfcfa33f18a98596a6c9b1340b633c85ccef84a0c4f889\\\"],\\\"sizeBytes\\\":769516783},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\"],\\\"sizeBytes\\\":721591926},{\\\"name
s\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\"],\\\"sizeBytes\\\":646867625},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\"],\\\"sizeBytes\\\":638910445},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:ae245c97fc463e876c3024efb806fa8f4efb13b3f06f1bdd3e7e1447f5a5dce4\\\"],\\\"sizeBytes\\\":617699779},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:d4926e304011637ca9df370a193896d685f0f3ffabbec234ec827abdbeb083f9\\\"],\\\"sizeBytes\\\":607756695},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\"],\\\"sizeBytes\\\":584721741},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\"],\\\"sizeBytes\\\":545674969},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:574d49b89604b8e8103abf57feee77812fe8cf441eafc17fdff95d57ca80645e\\\"],\\\"sizeBytes\\\":542463064},{\\\"names\\\":[\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\"],\\\"sizeBytes\\\":539380592},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9506bdcf97d5200cf2cf4cdf110aebafdd141a24f6589bf1e1cfe27bb7fc1ed2\\\"],\\\"sizeBytes\\\":533027808},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\"],\\\"sizeBytes\\\":528200501},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\"],\\\"sizeBytes\\\":527774342},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e5e8108294b086fdb797365e5a46badba9b3d866bdcddc8460a51e05a253753d\\\"],\\\"sizeBytes\\\":526632426},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5827f6ae3beb4853192e02cc18890467bd251b33070f36f9a105991e7e6d3c9b\\\"],\\\"sizeBytes\\\":522490210},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:66c8fe5d45ff249643dae75185dd2787ea1b0ae87d5699a8222149c07689557c\\\"],\\\"sizeBytes\\\":520141094},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:baf975b6944f2844860c440636e0d4b80b2fdc473d30f32ae7d6989f2fc2b135\\\"],\\\"sizeBytes\\\":519815758},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:765cf9c3ebf4df049ebc022beaaf52f52852cf89fb802034536ad91dd45db807\\\"],\\\"sizeBytes\\\":519539350},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:52e442bc8198ac925caff87ddd35b3107b7375d5afc9c2eb041ca4e79db72c6f\\\"],\\\"sizeBytes\\\":518690683},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:43b0e0b7e1955ee905e48799a62f50b8a8df553190415ce1f5550375c2507ca5\\\"],\\\"sizeBytes\\\":518251952},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:977a316fa3598eb575a4477dafc09bbf06fad21c4ec2867052225d74f2a9f366\\\"],\\\"sizeBytes\\\":511136541},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\"],\\\"sizeBytes\\\":510122097},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@
sha256:dbd8603d717c26901bcf9731b1e0392ae4bc08a270ed1eeb45839e44bed9607d\\\"],\\\"sizeBytes\\\":508941917},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\"],\\\"sizeBytes\\\":508318343},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7a726c68cebc9b08edd734a8bae5150ae5950f7734fe9b9c2a6e0d06f21cc095\\\"],\\\"sizeBytes\\\":498380948},{\\\"names\\\":[\\\"quay.io/crcont/ocp-release@sha256:82501261b9c63012ba3b83fe4d6703c0af5eb9c9151670eb90ae480b9507d761\\\"],\\\"sizeBytes\\\":497232440},{\\\"names\\\":[\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:4e4239621caed0b0d9132d167403631e9af86be9a395977f013e201ead281bb4\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner@sha256:c0b1bec73fdb6853eb3bd9e9733aee2d760ca09a33cfd94adf9ab7b706e83fa9\\\",\\\"registry.redhat.io/openshift4/ose-csi-external-provisioner:latest\\\"],\\\"sizeBytes\\\":491224335},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:b0f7abf2f97afd1127d9245d764338c6047bac1711b2cee43112570a85946360\\\"],\\\"sizeBytes\\\":490381192},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:21b12ff0c81c1d535e7c31aff3a73b1e9ca763e5f88037f59ade0dfab6ed8946\\\"],\\\"sizeBytes\\\":482632652},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:036ed6efe4cb5f5b90ee7f9ef5297c8591b8d67aa36b3c58b4fc5417622a140c\\\"],\\\"sizeBytes\\\":477561861},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:0fe5a041a2b99d736e82f1b4a6cd9792c5e23ded475e9f0742cd19234070f989\\\"],\\\"sizeBytes\\\":475327956},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\"],\\\"sizeBytes\\\":475137830},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:2599f32933f5fea6066ede54ad8f6150adb7bd9067892f251d5913121d5c630d\\\"],\\\"sizeBytes\\\":472771950},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:651bbe9d418f49c2c889d731df67cf5d88dff59dc03f5a1b5d4c8bb3ae001f1a\\\"],\\\"sizeBytes\\\":469976318},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:4fe612a1572df462d6a4b664a10bc2e6cad239648acbf8c0303f8fca5d2596c0\\\"],\\\"sizeBytes\\\":468393024},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a5bb05344dd2296077f5066e908ede0eea23f5a12fb78ef86a9513c88d3faaca\\\"],\\\"sizeBytes\\\":464375011},{\\\"names\\\":[\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\"],\\\"sizeBytes\\\":462844959}],\\\"nodeInfo\\\":{\\\"bootID\\\":\\\"f5d437ef-57ca-47db-98fb-05e744e6bfaa\\\",\\\"systemUUID\\\":\\\"ff88a58d-633e-4023-819b-155f3656b514\\\"}}}\" for node \"crc\": Internal error occurred: failed calling webhook \"node.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/node?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.598847 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.598877 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.699785 5117 
kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.800084 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:37 crc kubenswrapper[5117]: E0123 08:54:37.900837 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.001179 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.102069 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.202871 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.303932 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.404446 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.505639 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.605821 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.706571 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.807670 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.868311 5117 eviction_manager.go:292] "Eviction manager: failed to get summary stats" err="failed to get node info: node \"crc\" not found" Jan 23 08:54:38 crc kubenswrapper[5117]: E0123 08:54:38.908748 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.009199 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.110171 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.211263 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.312457 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.413105 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.514297 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.615274 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc 
kubenswrapper[5117]: E0123 08:54:39.715716 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: I0123 08:54:39.769948 5117 kubelet_node_status.go:413] "Setting node annotation to enable volume controller attach/detach" Jan 23 08:54:39 crc kubenswrapper[5117]: I0123 08:54:39.770976 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:39 crc kubenswrapper[5117]: I0123 08:54:39.771042 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:39 crc kubenswrapper[5117]: I0123 08:54:39.771063 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.771592 5117 kubelet.go:3336] "No need to create a mirror pod, since failed to get node info from the cluster" err="node \"crc\" not found" node="crc" Jan 23 08:54:39 crc kubenswrapper[5117]: I0123 08:54:39.771887 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.772118 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.816296 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:39 crc kubenswrapper[5117]: E0123 08:54:39.916801 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.017779 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.118444 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.218784 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.319805 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.420865 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.521118 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.621905 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.722174 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.823055 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" 
Jan 23 08:54:40 crc kubenswrapper[5117]: E0123 08:54:40.923794 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.024168 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.124971 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.226094 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.326979 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.427646 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.528044 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.628501 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.729020 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.830020 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:41 crc kubenswrapper[5117]: E0123 08:54:41.930874 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.032066 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.132838 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.233587 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.334066 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.435216 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.535851 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.636564 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.737630 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.838009 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:42 crc kubenswrapper[5117]: E0123 08:54:42.938486 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node 
\"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.039173 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.140053 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.241164 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.341607 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.442724 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.543864 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.644674 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.745810 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.846894 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:43 crc kubenswrapper[5117]: E0123 08:54:43.947862 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.048594 5117 kubelet_node_status.go:515] "Error getting the current node from lister" err="node \"crc\" not found" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.103509 5117 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.150795 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.150859 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.150872 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.150889 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.150914 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.194376 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.207988 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.252593 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.252653 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.252676 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.252701 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.252722 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.308276 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.355525 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.355574 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.355588 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.355607 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.355622 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.410876 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.457387 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.457446 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.457458 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.457475 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.457486 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.510877 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-etcd/etcd-crc" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.559399 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.559475 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.559504 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.559534 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.559557 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.661973 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.662042 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.662061 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.662085 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.662106 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.697402 5117 apiserver.go:52] "Watching apiserver" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.703758 5117 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.704730 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/node-resolver-qccfb","openshift-kube-apiserver/kube-apiserver-crc","openshift-kube-scheduler/openshift-kube-scheduler-crc","openshift-multus/multus-additional-cni-plugins-ggtdd","openshift-network-operator/iptables-alerter-5jnd7","openshift-machine-config-operator/machine-config-daemon-qfh6g","openshift-network-diagnostics/network-check-target-fhkjl","openshift-network-node-identity/network-node-identity-dgvkt","openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn","openshift-image-registry/node-ca-b7cxh","openshift-machine-config-operator/kube-rbac-proxy-crio-crc","openshift-multus/multus-g7xdw","openshift-multus/network-metrics-daemon-gcn4t","openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6","openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv","openshift-etcd/etcd-crc","openshift-kube-controller-manager/kube-controller-manager-crc","openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5","openshift-ovn-kubernetes/ovnkube-node-6t5h9"] Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.706027 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.706569 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.706727 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.707103 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.707236 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.708088 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.708311 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.708480 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.708827 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.708896 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.710985 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.711784 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.711916 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.712745 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.713466 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.711012 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.715739 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.717393 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.715874 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.719615 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.720766 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.720821 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.720827 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.720837 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.722048 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.722704 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.724571 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.724680 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.724705 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.724803 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.724707 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.727335 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.728981 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.729083 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.729955 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.731490 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.732370 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.733477 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.733492 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.734037 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.734512 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.734595 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.736257 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.736268 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.736421 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.736486 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.737821 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.738035 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.738278 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.741457 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.742670 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.744401 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.744534 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.745031 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.745265 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.745457 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.746807 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.746891 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.746998 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.748542 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.756566 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.763934 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.763979 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.763992 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.764008 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.764019 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.767191 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.778279 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook 
approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.787376 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d788132-7791-4db1-9057-4112a18f44fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wgfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wgfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-997cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.795787 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qccfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"586f9834-c55f-46f5-b903-ca1c6b6805b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djpnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qccfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.799699 5117 desired_state_of_world_populator.go:158] "Finished populating initial desired state of world" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.805108 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.811609 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d41b436-a78c-412b-b56c-54b8d73381e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon 
kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v4m4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v4m4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfh6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.819252 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.824325 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b7cxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j98l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b7cxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.831298 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"17b87002-b798-480a-8e17-83053d698239\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-check-target-container]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-check-target-container\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-gwt8b\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-target-fhkjl\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.838846 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[iptables-alerter]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"iptables-alerter\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/iptables-alerter\\\",\\\"name\\\":\\\"iptables-alerter-script\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-dsgwk\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"iptables-alerter-5jnd7\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.839640 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.839795 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.839889 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.839968 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.840072 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.840255 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") 
pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.840614 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz" (OuterVolumeSpecName: "kube-api-access-ws8zz") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "kube-api-access-ws8zz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841047 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841329 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841453 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841565 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841670 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841851 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841993 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841463 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "tmpfs". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842232 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841801 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7" (OuterVolumeSpecName: "kube-api-access-tknt7") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "kube-api-access-tknt7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841851 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.841996 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842012 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842339 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842532 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842564 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842587 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842609 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842634 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842213 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca" (OuterVolumeSpecName: "serviceca") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "serviceca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842414 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert" (OuterVolumeSpecName: "cert") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842659 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842713 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842742 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842825 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842851 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842871 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842889 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842915 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842920 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert" (OuterVolumeSpecName: "package-server-manager-serving-cert") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "package-server-manager-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842732 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca" (OuterVolumeSpecName: "service-ca") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842939 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.842974 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843049 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843121 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843188 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843217 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843261 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843283 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843749 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: 
\"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843780 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843187 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn" (OuterVolumeSpecName: "kube-api-access-xnxbn") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "kube-api-access-xnxbn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843483 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp" (OuterVolumeSpecName: "tmp") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843564 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843648 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843570 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert" (OuterVolumeSpecName: "console-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843785 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl" (OuterVolumeSpecName: "kube-api-access-twvbl") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "kube-api-access-twvbl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844031 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp" (OuterVolumeSpecName: "tmp") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844110 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.843819 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844213 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844225 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844273 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw" (OuterVolumeSpecName: "kube-api-access-9z4sw") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "kube-api-access-9z4sw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844294 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844433 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844704 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-client". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844696 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844750 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844789 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844817 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844843 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844867 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844890 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844913 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844942 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844963 5117 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.844984 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845007 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845031 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") pod \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\" (UID: \"6ee8fbd3-1f81-4666-96da-5afc70819f1a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845055 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845078 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845101 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845122 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845160 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert" (OuterVolumeSpecName: "apiservice-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "apiservice-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845116 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf" (OuterVolumeSpecName: "kube-api-access-q4smf") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "kube-api-access-q4smf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845188 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845214 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845239 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845263 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845285 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845308 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845332 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845353 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845379 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: 
\"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845721 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845807 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845831 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845837 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp" (OuterVolumeSpecName: "tmp") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845854 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") pod \"af41de71-79cf-4590-bbe9-9e8b848862cb\" (UID: \"af41de71-79cf-4590-bbe9-9e8b848862cb\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845819 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845874 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845876 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config" (OuterVolumeSpecName: "config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845896 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845913 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845929 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845923 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config" (OuterVolumeSpecName: "config") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845944 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.845966 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk" (OuterVolumeSpecName: "kube-api-access-w94wk") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "kube-api-access-w94wk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846024 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846071 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846110 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846344 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846409 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846484 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846528 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") pod \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\" (UID: \"0dd0fbac-8c0d-4228-8faa-abbeedabf7db\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846566 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846603 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") pod \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\" (UID: \"a208c9c2-333b-4b4a-be0d-bc32ec38a821\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846649 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: 
\"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846692 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846731 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846770 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846809 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846263 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls" (OuterVolumeSpecName: "samples-operator-tls") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "samples-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846445 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token" (OuterVolumeSpecName: "node-bootstrap-token") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "node-bootstrap-token". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846525 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846546 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp" (OuterVolumeSpecName: "tmp") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "tmp". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846552 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config" (OuterVolumeSpecName: "auth-proxy-config") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846564 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf" (OuterVolumeSpecName: "kube-api-access-nmmzf") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "kube-api-access-nmmzf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846679 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq" (OuterVolumeSpecName: "kube-api-access-m26jq") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "kube-api-access-m26jq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846734 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca" (OuterVolumeSpecName: "etcd-service-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-service-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847662 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847705 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") pod \"d7e8f42f-dc0e-424b-bb56-5ec849834888\" (UID: \"d7e8f42f-dc0e-424b-bb56-5ec849834888\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847739 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847777 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847810 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847847 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847880 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847912 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847960 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848011 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 23 08:54:44 crc 
kubenswrapper[5117]: I0123 08:54:44.848057 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848108 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848191 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848247 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") pod \"cc85e424-18b2-4924-920b-bd291a8c4b01\" (UID: \"cc85e424-18b2-4924-920b-bd291a8c4b01\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848290 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848366 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") pod \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\" (UID: \"a52afe44-fb37-46ed-a1f8-bf39727a3cbe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848422 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848480 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848541 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848592 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") pod \"a7a88189-c967-4640-879e-27665747f20c\" 
(UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848643 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848691 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848741 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848791 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") pod \"6077b63e-53a2-4f96-9d56-1ce0324e4913\" (UID: \"6077b63e-53a2-4f96-9d56-1ce0324e4913\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848882 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848949 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849029 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849798 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849949 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") pod \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\" (UID: \"f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849996 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: 
\"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850034 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850079 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850250 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850311 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850363 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850428 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850491 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850551 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850601 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850639 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: 
\"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850682 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") pod \"f7e2c886-118e-43bb-bef1-c78134de392b\" (UID: \"f7e2c886-118e-43bb-bef1-c78134de392b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850720 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850761 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850807 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") pod \"c5f2bfad-70f6-4185-a3d9-81ce12720767\" (UID: \"c5f2bfad-70f6-4185-a3d9-81ce12720767\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850855 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") pod \"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850907 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850958 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851012 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") pod \"09cfa50b-4138-4585-a53e-64dd3ab73335\" (UID: \"09cfa50b-4138-4585-a53e-64dd3ab73335\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851053 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851092 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-serving-ca\" (UniqueName: 
\"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851126 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851199 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851236 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") pod \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\" (UID: \"584e1f4a-8205-47d7-8efb-3afc6017c4c9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851273 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") pod \"42a11a02-47e1-488f-b270-2679d3298b0e\" (UID: \"42a11a02-47e1-488f-b270-2679d3298b0e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851310 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851346 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") pod \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\" (UID: \"a555ff2e-0be6-46d5-897d-863bb92ae2b3\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851386 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851428 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851479 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851515 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851552 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851591 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851638 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") pod \"149b3c48-e17c-4a66-a835-d86dabf6ff13\" (UID: \"149b3c48-e17c-4a66-a835-d86dabf6ff13\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851692 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851741 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851809 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851875 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851951 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852015 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852067 5117 
reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") pod \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\" (UID: \"5ebfebf6-3ecd-458e-943f-bb25b52e2718\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852106 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") pod \"a7a88189-c967-4640-879e-27665747f20c\" (UID: \"a7a88189-c967-4640-879e-27665747f20c\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852180 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852242 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852305 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") pod \"0effdbcf-dd7d-404d-9d48-77536d665a5d\" (UID: \"0effdbcf-dd7d-404d-9d48-77536d665a5d\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852372 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") pod \"b605f283-6f2e-42da-a838-54421690f7d0\" (UID: \"b605f283-6f2e-42da-a838-54421690f7d0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852415 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852476 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852522 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852598 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852640 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852681 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") pod \"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852716 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852753 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852800 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852845 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") pod \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\" (UID: \"20ce4d18-fe25-4696-ad7c-1bd2d6200a3e\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852888 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852930 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") pod \"ce090a97-9ab6-4c40-a719-64ff2acd9778\" (UID: \"ce090a97-9ab6-4c40-a719-64ff2acd9778\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852967 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853228 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") pod 
\"94a6e063-3d1a-4d44-875d-185291448c31\" (UID: \"94a6e063-3d1a-4d44-875d-185291448c31\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853395 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853472 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") pod \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\" (UID: \"e1d2a42d-af1d-4054-9618-ab545e0ed8b7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853518 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853555 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") pod \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\" (UID: \"b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853602 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853638 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853679 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") pod \"18f80adb-c1c3-49ba-8ee4-932c851d3897\" (UID: \"18f80adb-c1c3-49ba-8ee4-932c851d3897\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853726 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") pod \"e093be35-bb62-4843-b2e8-094545761610\" (UID: \"e093be35-bb62-4843-b2e8-094545761610\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853762 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853802 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ddlk9\" (UniqueName: 
\"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") pod \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\" (UID: \"7fcc6409-8a0f-44c3-89e7-5aecd7610f8a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853844 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") pod \"2325ffef-9d5b-447f-b00e-3efc429acefe\" (UID: \"2325ffef-9d5b-447f-b00e-3efc429acefe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853884 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853922 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") pod \"01080b46-74f1-4191-8755-5152a57b3b25\" (UID: \"01080b46-74f1-4191-8755-5152a57b3b25\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846962 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd" (OuterVolumeSpecName: "kube-api-access-mjwtd") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "kube-api-access-mjwtd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.846993 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls" (OuterVolumeSpecName: "control-plane-machine-set-operator-tls") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "control-plane-machine-set-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847079 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847227 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz" (OuterVolumeSpecName: "kube-api-access-grwfz") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "kube-api-access-grwfz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854587 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h" (OuterVolumeSpecName: "kube-api-access-94l9h") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "kube-api-access-94l9h". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847300 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities" (OuterVolumeSpecName: "utilities") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847323 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config" (OuterVolumeSpecName: "config") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847288 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847494 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config" (OuterVolumeSpecName: "console-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.847931 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848259 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848275 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config" (OuterVolumeSpecName: "config") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848521 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx" (OuterVolumeSpecName: "kube-api-access-l9stx") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "kube-api-access-l9stx". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848579 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities" (OuterVolumeSpecName: "utilities") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848596 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities" (OuterVolumeSpecName: "utilities") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848650 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca" (OuterVolumeSpecName: "service-ca") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848658 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848866 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls" (OuterVolumeSpecName: "machine-approver-tls") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "machine-approver-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.848681 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849037 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849293 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd" (OuterVolumeSpecName: "kube-api-access-8pskd") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "kube-api-access-8pskd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849437 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849466 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs" (OuterVolumeSpecName: "webhook-certs") pod "0dd0fbac-8c0d-4228-8faa-abbeedabf7db" (UID: "0dd0fbac-8c0d-4228-8faa-abbeedabf7db"). InnerVolumeSpecName "webhook-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849480 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities" (OuterVolumeSpecName: "utilities") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.849508 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850006 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850182 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "d7e8f42f-dc0e-424b-bb56-5ec849834888" (UID: "d7e8f42f-dc0e-424b-bb56-5ec849834888"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850575 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config" (OuterVolumeSpecName: "config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850646 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv" (OuterVolumeSpecName: "kube-api-access-6rmnv") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "kube-api-access-6rmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850751 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850823 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq" (OuterVolumeSpecName: "kube-api-access-d4tqq") pod "6ee8fbd3-1f81-4666-96da-5afc70819f1a" (UID: "6ee8fbd3-1f81-4666-96da-5afc70819f1a"). InnerVolumeSpecName "kube-api-access-d4tqq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850903 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b" (OuterVolumeSpecName: "kube-api-access-pgx6b") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "kube-api-access-pgx6b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.850903 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs" (OuterVolumeSpecName: "certs") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851041 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps" (OuterVolumeSpecName: "kube-api-access-d7cps") pod "af41de71-79cf-4590-bbe9-9e8b848862cb" (UID: "af41de71-79cf-4590-bbe9-9e8b848862cb"). InnerVolumeSpecName "kube-api-access-d7cps". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851208 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851240 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities" (OuterVolumeSpecName: "utilities") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). 
InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851315 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp" (OuterVolumeSpecName: "kube-api-access-8nspp") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "kube-api-access-8nspp". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851495 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6" (OuterVolumeSpecName: "kube-api-access-tkdh6") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "kube-api-access-tkdh6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851520 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config" (OuterVolumeSpecName: "config") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851573 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl" (OuterVolumeSpecName: "kube-api-access-26xrl") pod "a208c9c2-333b-4b4a-be0d-bc32ec38a821" (UID: "a208c9c2-333b-4b4a-be0d-bc32ec38a821"). InnerVolumeSpecName "kube-api-access-26xrl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851577 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config" (OuterVolumeSpecName: "config") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851593 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem" (OuterVolumeSpecName: "ca-trust-extracted-pem") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "ca-trust-extracted-pem". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.851673 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852014 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6" (OuterVolumeSpecName: "kube-api-access-ftwb6") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "kube-api-access-ftwb6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852021 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume" (OuterVolumeSpecName: "config-volume") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852231 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852239 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.852553 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config" (OuterVolumeSpecName: "config") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853159 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853121 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "etcd-serving-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853178 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk" (OuterVolumeSpecName: "kube-api-access-qqbfk") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "kube-api-access-qqbfk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853216 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853458 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9" (OuterVolumeSpecName: "kube-api-access-9vsz9") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "kube-api-access-9vsz9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853455 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s" (OuterVolumeSpecName: "kube-api-access-xfp5s") pod "cc85e424-18b2-4924-920b-bd291a8c4b01" (UID: "cc85e424-18b2-4924-920b-bd291a8c4b01"). InnerVolumeSpecName "kube-api-access-xfp5s". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853486 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w" (OuterVolumeSpecName: "kube-api-access-rzt4w") pod "a52afe44-fb37-46ed-a1f8-bf39727a3cbe" (UID: "a52afe44-fb37-46ed-a1f8-bf39727a3cbe"). InnerVolumeSpecName "kube-api-access-rzt4w". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853507 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities" (OuterVolumeSpecName: "utilities") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853298 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config" (OuterVolumeSpecName: "config") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853840 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.853874 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config" (OuterVolumeSpecName: "config") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854003 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854186 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp" (OuterVolumeSpecName: "tmp") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854462 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854502 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854780 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854850 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw" (OuterVolumeSpecName: "kube-api-access-5lcfw") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "kube-api-access-5lcfw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854882 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6" (OuterVolumeSpecName: "kube-api-access-pllx6") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "kube-api-access-pllx6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855270 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs" (OuterVolumeSpecName: "kube-api-access-l87hs") pod "5ebfebf6-3ecd-458e-943f-bb25b52e2718" (UID: "5ebfebf6-3ecd-458e-943f-bb25b52e2718"). InnerVolumeSpecName "kube-api-access-l87hs". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855415 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855661 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv" (OuterVolumeSpecName: "kube-api-access-dztfv") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "kube-api-access-dztfv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855722 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "b605f283-6f2e-42da-a838-54421690f7d0" (UID: "b605f283-6f2e-42da-a838-54421690f7d0"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.854918 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") pod \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\" (UID: \"6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855819 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") pod \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\" (UID: \"31fa8943-81cc-4750-a0b7-0fa9ab5af883\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855826 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert" (OuterVolumeSpecName: "webhook-cert") pod "a7a88189-c967-4640-879e-27665747f20c" (UID: "a7a88189-c967-4640-879e-27665747f20c"). InnerVolumeSpecName "webhook-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855881 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config" (OuterVolumeSpecName: "config") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855910 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "tmp-dir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.855943 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m" (OuterVolumeSpecName: "kube-api-access-4hb7m") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "kube-api-access-4hb7m". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856145 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config" (OuterVolumeSpecName: "config") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856211 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config" (OuterVolumeSpecName: "config") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856259 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle" (OuterVolumeSpecName: "signing-cabundle") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-cabundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856677 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856786 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856821 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856734 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856793 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj" (OuterVolumeSpecName: "kube-api-access-mfzkj") pod "0effdbcf-dd7d-404d-9d48-77536d665a5d" (UID: "0effdbcf-dd7d-404d-9d48-77536d665a5d"). InnerVolumeSpecName "kube-api-access-mfzkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856914 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") pod \"92dfbade-90b6-4169-8c07-72cff7f2c82b\" (UID: \"92dfbade-90b6-4169-8c07-72cff7f2c82b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.856992 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") pod \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\" (UID: \"d45be74c-0d98-4d18-90e4-f7ef1b6daaf7\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857017 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") pod \"d565531a-ff86-4608-9d19-767de01ac31b\" (UID: \"d565531a-ff86-4608-9d19-767de01ac31b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857068 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857076 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-router-certs". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857097 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") pod \"593a3561-7760-45c5-8f91-5aaef7475d0f\" (UID: \"593a3561-7760-45c5-8f91-5aaef7475d0f\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857156 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857178 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857226 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857250 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") pod \"7599e0b6-bddf-4def-b7f2-0b32206e8651\" (UID: \"7599e0b6-bddf-4def-b7f2-0b32206e8651\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857296 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") pod \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\" (UID: \"dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857322 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") pod \"81e39f7b-62e4-4fc9-992a-6535ce127a02\" (UID: \"81e39f7b-62e4-4fc9-992a-6535ce127a02\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857346 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857404 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") pod \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\" (UID: \"f65c0ac1-8bca-454d-a2e6-e35cb418beac\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857430 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") pod 
\"736c54fe-349c-4bb9-870a-d1c1d1c03831\" (UID: \"736c54fe-349c-4bb9-870a-d1c1d1c03831\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857487 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") pod \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\" (UID: \"fc8db2c7-859d-47b3-a900-2bd0c0b2973b\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857522 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857554 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") pod \"301e1965-1754-483d-b6cc-bfae7038bbca\" (UID: \"301e1965-1754-483d-b6cc-bfae7038bbca\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857582 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") pod \"c491984c-7d4b-44aa-8c1e-d7974424fa47\" (UID: \"c491984c-7d4b-44aa-8c1e-d7974424fa47\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857606 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") pod \"16bdd140-dce1-464c-ab47-dd5798d1d256\" (UID: \"16bdd140-dce1-464c-ab47-dd5798d1d256\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857632 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857658 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") pod \"d19cb085-0c5b-4810-b654-ce7923221d90\" (UID: \"d19cb085-0c5b-4810-b654-ce7923221d90\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857683 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") pod \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\" (UID: \"869851b9-7ffb-4af0-b166-1d8aa40a5f80\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857712 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") pod \"7afa918d-be67-40a6-803c-d3b0ae99d815\" (UID: \"7afa918d-be67-40a6-803c-d3b0ae99d815\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857736 5117 reconciler_common.go:162] 
"operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") pod \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\" (UID: \"9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857760 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") pod \"f559dfa3-3917-43a2-97f6-61ddfda10e93\" (UID: \"f559dfa3-3917-43a2-97f6-61ddfda10e93\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857782 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") pod \"7df94c10-441d-4386-93a6-6730fb7bcde0\" (UID: \"7df94c10-441d-4386-93a6-6730fb7bcde0\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857804 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") pod \"9f71a554-e414-4bc3-96d2-674060397afe\" (UID: \"9f71a554-e414-4bc3-96d2-674060397afe\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857829 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") pod \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\" (UID: \"6edfcf45-925b-4eff-b940-95b6fc0b85d4\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857855 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") pod \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\" (UID: \"af33e427-6803-48c2-a76a-dd9deb7cbf9a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857949 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.857982 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") pod \"567683bd-0efc-4f21-b076-e28559628404\" (UID: \"567683bd-0efc-4f21-b076-e28559628404\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.858008 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") pod \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\" (UID: \"71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a\") " Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.858034 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") pod \"b4750666-1362-4001-abd0-6f89964cc621\" (UID: \"b4750666-1362-4001-abd0-6f89964cc621\") " Jan 23 08:54:44 
crc kubenswrapper[5117]: I0123 08:54:44.858110 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.858217 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg" (OuterVolumeSpecName: "kube-api-access-wbmqg") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "kube-api-access-wbmqg". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.858586 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj" (OuterVolumeSpecName: "kube-api-access-qgrkj") pod "42a11a02-47e1-488f-b270-2679d3298b0e" (UID: "42a11a02-47e1-488f-b270-2679d3298b0e"). InnerVolumeSpecName "kube-api-access-qgrkj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.858555 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"d582122d-1bf3-4b38-95a3-a89488b98725\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [kubecfg-setup]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [ovn-controller ovn-acl-logging kube-rbac-proxy-node kube-rbac-proxy-ovn-metrics northd nbdb sbdb 
ovnkube-controller]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-node\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-ovn-metrics\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-node-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"nbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"northd\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-a
pi-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-acl-logging\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovn-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn/\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/dev/log\\\",\\\"name\\\":\\\"log-socket\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-controller\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/systemd/system\\\",\\\"name\\\":\\\"systemd-units\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host\\\",\\\"name\\\":\\\"host-slash\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/ovn-kubernetes/\\\",\\\"name\\\":\\\"host-run-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/run/systemd/private\\\",\\\"name\\\":\\\"run-systemd\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/cni-bin-dir\\\",\\\"name\\\":\\\"host-cni-bin\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d\\\",\\\"name\\\":\\\"host-cni-netd\\\"
},{\\\"mountPath\\\":\\\"/var/lib/cni/networks/ovn-k8s-cni-overlay\\\",\\\"name\\\":\\\"host-var-lib-cni-networks-ovn-kubernetes\\\"},{\\\"mountPath\\\":\\\"/run/openvswitch\\\",\\\"name\\\":\\\"run-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/log/ovnkube/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/etc/openvswitch\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/lib/openvswitch\\\",\\\"name\\\":\\\"var-lib-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"sbdb\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/ovnkube-lib\\\",\\\"name\\\":\\\"ovnkube-script-lib\\\"},{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/run/ovn/\\\",\\\"name\\\":\\\"run-ovn\\\"},{\\\"mountPath\\\":\\\"/var/log/ovn\\\",\\\"name\\\":\\\"node-log\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kubecfg-setup\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/ovn/\\\",\\\"name\\\":\\\"etc-openvswitch\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-xn4sb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-node-6t5h9\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.858981 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "profile-collector-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859097 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr" (OuterVolumeSpecName: "kube-api-access-wj4qr") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "kube-api-access-wj4qr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859271 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config" (OuterVolumeSpecName: "mcd-auth-proxy-config") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "mcd-auth-proxy-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859450 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv" (OuterVolumeSpecName: "kube-api-access-xxfcv") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "kube-api-access-xxfcv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859453 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859518 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b" (OuterVolumeSpecName: "kube-api-access-zsb9b") pod "09cfa50b-4138-4585-a53e-64dd3ab73335" (UID: "09cfa50b-4138-4585-a53e-64dd3ab73335"). InnerVolumeSpecName "kube-api-access-zsb9b". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859502 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859570 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859617 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859833 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.859952 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates" (OuterVolumeSpecName: "available-featuregates") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "available-featuregates". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.860022 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.860351 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" (UID: "b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.860760 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config" (OuterVolumeSpecName: "config") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.860973 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs" (OuterVolumeSpecName: "metrics-certs") pod "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" (UID: "f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4"). InnerVolumeSpecName "metrics-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861264 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "01080b46-74f1-4191-8755-5152a57b3b25" (UID: "01080b46-74f1-4191-8755-5152a57b3b25"). 
InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861417 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861504 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config" (OuterVolumeSpecName: "config") pod "c5f2bfad-70f6-4185-a3d9-81ce12720767" (UID: "c5f2bfad-70f6-4185-a3d9-81ce12720767"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861572 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861766 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr" (OuterVolumeSpecName: "kube-api-access-6g4lr") pod "f7e2c886-118e-43bb-bef1-c78134de392b" (UID: "f7e2c886-118e-43bb-bef1-c78134de392b"). InnerVolumeSpecName "kube-api-access-6g4lr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861829 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7" (OuterVolumeSpecName: "kube-api-access-hm9x7") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "kube-api-access-hm9x7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861832 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts" (OuterVolumeSpecName: "kube-api-access-4g8ts") pod "92dfbade-90b6-4169-8c07-72cff7f2c82b" (UID: "92dfbade-90b6-4169-8c07-72cff7f2c82b"). InnerVolumeSpecName "kube-api-access-4g8ts". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861846 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle" (OuterVolumeSpecName: "trusted-ca-bundle") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.861963 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth" (OuterVolumeSpecName: "stats-auth") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "stats-auth". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.862314 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv" (OuterVolumeSpecName: "kube-api-access-pddnv") pod "e093be35-bb62-4843-b2e8-094545761610" (UID: "e093be35-bb62-4843-b2e8-094545761610"). InnerVolumeSpecName "kube-api-access-pddnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.862469 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t" (OuterVolumeSpecName: "kube-api-access-zth6t") pod "6077b63e-53a2-4f96-9d56-1ce0324e4913" (UID: "6077b63e-53a2-4f96-9d56-1ce0324e4913"). InnerVolumeSpecName "kube-api-access-zth6t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.862488 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.862510 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.862913 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.863144 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:54:45.363103026 +0000 UTC m=+97.119228052 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863176 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config" (OuterVolumeSpecName: "console-oauth-config") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "console-oauth-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863339 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "16bdd140-dce1-464c-ab47-dd5798d1d256" (UID: "16bdd140-dce1-464c-ab47-dd5798d1d256"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863415 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities" (OuterVolumeSpecName: "utilities") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863432 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863459 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config" (OuterVolumeSpecName: "encryption-config") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "encryption-config". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863546 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863553 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca" (OuterVolumeSpecName: "etcd-serving-ca") pod "f559dfa3-3917-43a2-97f6-61ddfda10e93" (UID: "f559dfa3-3917-43a2-97f6-61ddfda10e93"). InnerVolumeSpecName "etcd-serving-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863672 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls" (OuterVolumeSpecName: "proxy-tls") pod "e1d2a42d-af1d-4054-9618-ab545e0ed8b7" (UID: "e1d2a42d-af1d-4054-9618-ab545e0ed8b7"). InnerVolumeSpecName "proxy-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863689 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca" (OuterVolumeSpecName: "image-import-ca") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "image-import-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.863817 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz" (OuterVolumeSpecName: "kube-api-access-7jjkz") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "kube-api-access-7jjkz". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.863985 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864037 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs" (OuterVolumeSpecName: "tmpfs") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "tmpfs". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.864104 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:45.364083633 +0000 UTC m=+97.120208659 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864420 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client" (OuterVolumeSpecName: "etcd-client") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-client". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864724 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864751 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls" (OuterVolumeSpecName: "image-registry-operator-tls") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "image-registry-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864844 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865046 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc" (OuterVolumeSpecName: "kube-api-access-zg8nc") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "kube-api-access-zg8nc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865068 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865185 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh" (OuterVolumeSpecName: "kube-api-access-m5lgh") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "kube-api-access-m5lgh". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864869 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy" (OuterVolumeSpecName: "cni-binary-copy") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "cni-binary-copy". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.864856 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert" (OuterVolumeSpecName: "oauth-serving-cert") pod "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" (UID: "6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca"). InnerVolumeSpecName "oauth-serving-cert". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865384 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls" (OuterVolumeSpecName: "metrics-tls") pod "9f71a554-e414-4bc3-96d2-674060397afe" (UID: "9f71a554-e414-4bc3-96d2-674060397afe"). InnerVolumeSpecName "metrics-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865436 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9" (OuterVolumeSpecName: "kube-api-access-99zj9") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "kube-api-access-99zj9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865566 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle" (OuterVolumeSpecName: "service-ca-bundle") pod "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" (UID: "dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9"). InnerVolumeSpecName "service-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865740 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "2325ffef-9d5b-447f-b00e-3efc429acefe" (UID: "2325ffef-9d5b-447f-b00e-3efc429acefe"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865823 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config" (OuterVolumeSpecName: "multus-daemon-config") pod "81e39f7b-62e4-4fc9-992a-6535ce127a02" (UID: "81e39f7b-62e4-4fc9-992a-6535ce127a02"). InnerVolumeSpecName "multus-daemon-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865863 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca" (OuterVolumeSpecName: "etcd-ca") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "etcd-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865979 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "f65c0ac1-8bca-454d-a2e6-e35cb418beac" (UID: "f65c0ac1-8bca-454d-a2e6-e35cb418beac"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.865983 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images" (OuterVolumeSpecName: "images") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "images". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.866614 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images" (OuterVolumeSpecName: "images") pod "d565531a-ff86-4608-9d19-767de01ac31b" (UID: "d565531a-ff86-4608-9d19-767de01ac31b"). InnerVolumeSpecName "images". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.866182 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "584e1f4a-8205-47d7-8efb-3afc6017c4c9" (UID: "584e1f4a-8205-47d7-8efb-3afc6017c4c9"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.866704 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-conf-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867051 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867094 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867159 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d788132-7791-4db1-9057-4112a18f44fa-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867194 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-cni-bin\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867197 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867253 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867229 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for 
volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-multus-certs\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867264 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867283 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867287 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867295 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867325 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xn4sb\" (UniqueName: \"kubernetes.io/projected/d582122d-1bf3-4b38-95a3-a89488b98725-kube-api-access-xn4sb\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867545 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867577 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-k8s-cni-cncf-io\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867600 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-hostroot\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867622 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cnibin\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc 
kubenswrapper[5117]: I0123 08:54:44.867642 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867686 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867717 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-ovn\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867739 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-bin\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867762 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-config\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867786 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867819 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867819 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867852 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-node-log\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.867892 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d41b436-a78c-412b-b56c-54b8d73381e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868248 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2d41b436-a78c-412b-b56c-54b8d73381e6-rootfs\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868361 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-os-release\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868384 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-slash\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868498 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf" (OuterVolumeSpecName: "kube-api-access-6dmhf") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "kube-api-access-6dmhf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868520 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp" (OuterVolumeSpecName: "tmp") pod "7afa918d-be67-40a6-803c-d3b0ae99d815" (UID: "7afa918d-be67-40a6-803c-d3b0ae99d815"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868626 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868636 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9" (OuterVolumeSpecName: "kube-api-access-ddlk9") pod "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" (UID: "7fcc6409-8a0f-44c3-89e7-5aecd7610f8a"). InnerVolumeSpecName "kube-api-access-ddlk9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868641 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca" (OuterVolumeSpecName: "client-ca") pod "736c54fe-349c-4bb9-870a-d1c1d1c03831" (UID: "736c54fe-349c-4bb9-870a-d1c1d1c03831"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.869248 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities" (OuterVolumeSpecName: "utilities") pod "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" (UID: "9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.869526 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c" (OuterVolumeSpecName: "kube-api-access-8nb9c") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "kube-api-access-8nb9c". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.869585 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-user-template-login". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.869878 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.870373 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2" (OuterVolumeSpecName: "kube-api-access-ks6v2") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "kube-api-access-ks6v2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.870371 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert" (OuterVolumeSpecName: "srv-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "srv-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871050 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit" (OuterVolumeSpecName: "audit") pod "d19cb085-0c5b-4810-b654-ce7923221d90" (UID: "d19cb085-0c5b-4810-b654-ce7923221d90"). InnerVolumeSpecName "audit". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.868699 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-etc-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871308 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert" (OuterVolumeSpecName: "profile-collector-cert") pod "301e1965-1754-483d-b6cc-bfae7038bbca" (UID: "301e1965-1754-483d-b6cc-bfae7038bbca"). InnerVolumeSpecName "profile-collector-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871613 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg" (OuterVolumeSpecName: "kube-api-access-hckvg") pod "fc8db2c7-859d-47b3-a900-2bd0c0b2973b" (UID: "fc8db2c7-859d-47b3-a900-2bd0c0b2973b"). InnerVolumeSpecName "kube-api-access-hckvg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871283 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-env-overrides\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871688 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871705 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-netns\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871761 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.871817 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "6edfcf45-925b-4eff-b940-95b6fc0b85d4" (UID: "6edfcf45-925b-4eff-b940-95b6fc0b85d4"). InnerVolumeSpecName "v4-0-config-system-serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872436 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-env-overrides\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872509 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d582122d-1bf3-4b38-95a3-a89488b98725-ovn-node-metrics-cert\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872547 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/586f9834-c55f-46f5-b903-ca1c6b6805b0-tmp-dir\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872571 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-cni-multus\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872597 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-system-cni-dir\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872619 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vbxkb\" (UniqueName: \"kubernetes.io/projected/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-kube-api-access-vbxkb\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872648 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-host\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872671 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-socket-dir-parent\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872692 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-56jgw\" (UniqueName: 
\"kubernetes.io/projected/70f944bb-0390-45c1-914f-5389215db1cd-kube-api-access-56jgw\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872721 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872747 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872771 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872796 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872820 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872848 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872874 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872901 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5wgfg\" (UniqueName: \"kubernetes.io/projected/8d788132-7791-4db1-9057-4112a18f44fa-kube-api-access-5wgfg\") pod 
\"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872925 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70f944bb-0390-45c1-914f-5389215db1cd-cni-binary-copy\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872946 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-systemd-units\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872971 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.872998 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/586f9834-c55f-46f5-b903-ca1c6b6805b0-hosts-file\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.873019 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-system-cni-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.873040 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70f944bb-0390-45c1-914f-5389215db1cd-multus-daemon-config\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.874304 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "149b3c48-e17c-4a66-a835-d86dabf6ff13" (UID: "149b3c48-e17c-4a66-a835-d86dabf6ff13"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.874493 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.874661 5117 swap_util.go:74] "error creating dir to test if tmpfs noswap is enabled. 
Assuming not supported" mount path="" error="stat /var/lib/kubelet/plugins/kubernetes.io/empty-dir: no such file or directory" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.875259 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:45.375237748 +0000 UTC m=+97.131362774 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876010 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876156 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fb4dx\" (UniqueName: \"kubernetes.io/projected/ad3e9798-99d9-456f-b969-840508a6ac91-kube-api-access-fb4dx\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876298 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-kubelet\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876385 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-netd\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876484 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-serviceca\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876581 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8j98l\" (UniqueName: \"kubernetes.io/projected/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-kube-api-access-8j98l\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876673 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cnibin\" (UniqueName: 
\"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-cnibin\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876770 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-etc-kubernetes\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876856 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876943 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d41b436-a78c-412b-b56c-54b8d73381e6-proxy-tls\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877032 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-var-lib-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877117 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-os-release\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877235 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-netns\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877331 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-script-lib\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.875387 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovn-node-metrics-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.875700 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr" (OuterVolumeSpecName: "kube-api-access-z5rsr") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "kube-api-access-z5rsr". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876297 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "af33e427-6803-48c2-a76a-dd9deb7cbf9a" (UID: "af33e427-6803-48c2-a76a-dd9deb7cbf9a"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.876671 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "7df94c10-441d-4386-93a6-6730fb7bcde0" (UID: "7df94c10-441d-4386-93a6-6730fb7bcde0"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877028 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key" (OuterVolumeSpecName: "signing-key") pod "ce090a97-9ab6-4c40-a719-64ff2acd9778" (UID: "ce090a97-9ab6-4c40-a719-64ff2acd9778"). InnerVolumeSpecName "signing-key". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877187 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls" (OuterVolumeSpecName: "machine-api-operator-tls") pod "c491984c-7d4b-44aa-8c1e-d7974424fa47" (UID: "c491984c-7d4b-44aa-8c1e-d7974424fa47"). InnerVolumeSpecName "machine-api-operator-tls". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877419 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-kubelet\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877628 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config" (OuterVolumeSpecName: "mcc-auth-proxy-config") pod "b4750666-1362-4001-abd0-6f89964cc621" (UID: "b4750666-1362-4001-abd0-6f89964cc621"). InnerVolumeSpecName "mcc-auth-proxy-config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877850 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.877965 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878448 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5v4m4\" (UniqueName: \"kubernetes.io/projected/2d41b436-a78c-412b-b56c-54b8d73381e6-kube-api-access-5v4m4\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878515 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-log-socket\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878540 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-ovn-kubernetes\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878542 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"iptables-alerter-script\" (UniqueName: \"kubernetes.io/configmap/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-iptables-alerter-script\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878587 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-djpnc\" (UniqueName: \"kubernetes.io/projected/586f9834-c55f-46f5-b903-ca1c6b6805b0-kube-api-access-djpnc\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878613 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-cni-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878670 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-identity-cm\" (UniqueName: 
\"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878699 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-systemd\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.878955 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879005 5117 reconciler_common.go:299] "Volume detached for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/6ee8fbd3-1f81-4666-96da-5afc70819f1a-samples-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879020 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/f65c0ac1-8bca-454d-a2e6-e35cb418beac-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879033 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879068 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d4tqq\" (UniqueName: \"kubernetes.io/projected/6ee8fbd3-1f81-4666-96da-5afc70819f1a-kube-api-access-d4tqq\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879080 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tkdh6\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-kube-api-access-tkdh6\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879092 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l9stx\" (UniqueName: \"kubernetes.io/projected/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-kube-api-access-l9stx\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879104 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879116 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879165 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7afa918d-be67-40a6-803c-d3b0ae99d815-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879179 5117 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: 
\"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879190 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-grwfz\" (UniqueName: \"kubernetes.io/projected/31fa8943-81cc-4750-a0b7-0fa9ab5af883-kube-api-access-grwfz\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879202 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879238 5117 reconciler_common.go:299] "Volume detached for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-node-bootstrap-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879252 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879262 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879273 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879285 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/736c54fe-349c-4bb9-870a-d1c1d1c03831-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879319 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d7e8f42f-dc0e-424b-bb56-5ec849834888-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879331 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879344 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879359 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7cps\" (UniqueName: \"kubernetes.io/projected/af41de71-79cf-4590-bbe9-9e8b848862cb-kube-api-access-d7cps\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879372 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8pskd\" (UniqueName: \"kubernetes.io/projected/a555ff2e-0be6-46d5-897d-863bb92ae2b3-kube-api-access-8pskd\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879409 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: 
\"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879421 5117 reconciler_common.go:299] "Volume detached for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-machine-approver-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879433 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879445 5117 reconciler_common.go:299] "Volume detached for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879480 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nmmzf\" (UniqueName: \"kubernetes.io/projected/7df94c10-441d-4386-93a6-6730fb7bcde0-kube-api-access-nmmzf\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879493 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879503 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879516 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879525 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879557 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6rmnv\" (UniqueName: \"kubernetes.io/projected/b605f283-6f2e-42da-a838-54421690f7d0-kube-api-access-6rmnv\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879570 5117 reconciler_common.go:299] "Volume detached for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-webhook-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879583 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c5f2bfad-70f6-4185-a3d9-81ce12720767-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879593 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-26xrl\" (UniqueName: \"kubernetes.io/projected/a208c9c2-333b-4b4a-be0d-bc32ec38a821-kube-api-access-26xrl\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879604 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/92dfbade-90b6-4169-8c07-72cff7f2c82b-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879614 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ftwb6\" (UniqueName: \"kubernetes.io/projected/9f71a554-e414-4bc3-96d2-674060397afe-kube-api-access-ftwb6\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879568 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-identity-cm\" (UniqueName: \"kubernetes.io/configmap/fc4541ce-7789-4670-bc75-5c2868e52ce0-ovnkube-identity-cm\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.879648 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/09cfa50b-4138-4585-a53e-64dd3ab73335-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880055 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/567683bd-0efc-4f21-b076-e28559628404-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880073 5117 reconciler_common.go:299] "Volume detached for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-image-registry-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880084 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9vsz9\" (UniqueName: \"kubernetes.io/projected/c491984c-7d4b-44aa-8c1e-d7974424fa47-kube-api-access-9vsz9\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880094 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d7e8f42f-dc0e-424b-bb56-5ec849834888-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880102 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/01080b46-74f1-4191-8755-5152a57b3b25-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880112 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880120 5117 reconciler_common.go:299] "Volume detached for volume \"certs\" (UniqueName: \"kubernetes.io/secret/593a3561-7760-45c5-8f91-5aaef7475d0f-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880145 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880157 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880168 5117 reconciler_common.go:299] "Volume detached for 
volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880180 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/92dfbade-90b6-4169-8c07-72cff7f2c82b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880190 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/b4750666-1362-4001-abd0-6f89964cc621-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880202 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9e9b5059-1b3e-4067-a63d-2952cbe863af-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880235 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-94l9h\" (UniqueName: \"kubernetes.io/projected/16bdd140-dce1-464c-ab47-dd5798d1d256-kube-api-access-94l9h\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880246 5117 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880257 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xfp5s\" (UniqueName: \"kubernetes.io/projected/cc85e424-18b2-4924-920b-bd291a8c4b01-kube-api-access-xfp5s\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880270 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/d565531a-ff86-4608-9d19-767de01ac31b-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880281 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rzt4w\" (UniqueName: \"kubernetes.io/projected/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-kube-api-access-rzt4w\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880294 5117 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-ca-trust-extracted-pem\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880306 5117 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880319 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgx6b\" (UniqueName: \"kubernetes.io/projected/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-kube-api-access-pgx6b\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880330 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nspp\" (UniqueName: \"kubernetes.io/projected/a7a88189-c967-4640-879e-27665747f20c-kube-api-access-8nspp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880340 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" 
(UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880349 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7599e0b6-bddf-4def-b7f2-0b32206e8651-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880358 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zth6t\" (UniqueName: \"kubernetes.io/projected/6077b63e-53a2-4f96-9d56-1ce0324e4913-kube-api-access-zth6t\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880366 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880376 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880386 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qqbfk\" (UniqueName: \"kubernetes.io/projected/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-kube-api-access-qqbfk\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880395 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/18f80adb-c1c3-49ba-8ee4-932c851d3897-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880403 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4-metrics-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880413 5117 reconciler_common.go:299] "Volume detached for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-oauth-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880421 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880431 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880442 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wj4qr\" (UniqueName: \"kubernetes.io/projected/149b3c48-e17c-4a66-a835-d86dabf6ff13-kube-api-access-wj4qr\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880451 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/a555ff2e-0be6-46d5-897d-863bb92ae2b3-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880460 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880469 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880478 5117 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880487 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880495 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880504 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f7e2c886-118e-43bb-bef1-c78134de392b-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880512 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6g4lr\" (UniqueName: \"kubernetes.io/projected/f7e2c886-118e-43bb-bef1-c78134de392b-kube-api-access-6g4lr\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880521 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880532 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880542 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/c5f2bfad-70f6-4185-a3d9-81ce12720767-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880550 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-4hb7m\" (UniqueName: \"kubernetes.io/projected/94a6e063-3d1a-4d44-875d-185291448c31-kube-api-access-4hb7m\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880559 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wbmqg\" (UniqueName: \"kubernetes.io/projected/18f80adb-c1c3-49ba-8ee4-932c851d3897-kube-api-access-wbmqg\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880568 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880576 5117 reconciler_common.go:299] "Volume detached for volume 
\"kube-api-access-zsb9b\" (UniqueName: \"kubernetes.io/projected/09cfa50b-4138-4585-a53e-64dd3ab73335-kube-api-access-zsb9b\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880586 5117 reconciler_common.go:299] "Volume detached for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-cabundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880595 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-etcd-serving-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880603 5117 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880612 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880620 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/584e1f4a-8205-47d7-8efb-3afc6017c4c9-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880629 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qgrkj\" (UniqueName: \"kubernetes.io/projected/42a11a02-47e1-488f-b270-2679d3298b0e-kube-api-access-qgrkj\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880639 5117 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880648 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7599e0b6-bddf-4def-b7f2-0b32206e8651-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880657 5117 reconciler_common.go:299] "Volume detached for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-stats-auth\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880665 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880674 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880684 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/149b3c48-e17c-4a66-a835-d86dabf6ff13-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880693 5117 reconciler_common.go:299] "Volume detached for volume 
\"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880701 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880709 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8nb9c\" (UniqueName: \"kubernetes.io/projected/6edfcf45-925b-4eff-b940-95b6fc0b85d4-kube-api-access-8nb9c\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880718 5117 reconciler_common.go:299] "Volume detached for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-mcd-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880728 5117 reconciler_common.go:299] "Volume detached for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-image-import-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880736 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880745 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l87hs\" (UniqueName: \"kubernetes.io/projected/5ebfebf6-3ecd-458e-943f-bb25b52e2718-kube-api-access-l87hs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880754 5117 reconciler_common.go:299] "Volume detached for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-webhook-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880762 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pllx6\" (UniqueName: \"kubernetes.io/projected/81e39f7b-62e4-4fc9-992a-6535ce127a02-kube-api-access-pllx6\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880770 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dztfv\" (UniqueName: \"kubernetes.io/projected/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-kube-api-access-dztfv\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880779 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mfzkj\" (UniqueName: \"kubernetes.io/projected/0effdbcf-dd7d-404d-9d48-77536d665a5d-kube-api-access-mfzkj\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880789 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/b605f283-6f2e-42da-a838-54421690f7d0-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880798 5117 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/6edfcf45-925b-4eff-b940-95b6fc0b85d4-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880807 
5117 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/7df94c10-441d-4386-93a6-6730fb7bcde0-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880816 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xxfcv\" (UniqueName: \"kubernetes.io/projected/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-kube-api-access-xxfcv\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880825 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880833 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880841 5117 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880849 5117 reconciler_common.go:299] "Volume detached for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/16bdd140-dce1-464c-ab47-dd5798d1d256-available-featuregates\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880857 5117 reconciler_common.go:299] "Volume detached for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/c491984c-7d4b-44aa-8c1e-d7974424fa47-machine-api-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880867 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880877 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z5rsr\" (UniqueName: \"kubernetes.io/projected/af33e427-6803-48c2-a76a-dd9deb7cbf9a-kube-api-access-z5rsr\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880885 5117 reconciler_common.go:299] "Volume detached for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/ce090a97-9ab6-4c40-a719-64ff2acd9778-signing-key\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880893 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7afa918d-be67-40a6-803c-d3b0ae99d815-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880902 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5lcfw\" (UniqueName: \"kubernetes.io/projected/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-kube-api-access-5lcfw\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880910 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-proxy-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: 
I0123 08:54:44.880919 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/f65c0ac1-8bca-454d-a2e6-e35cb418beac-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880927 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880935 5117 reconciler_common.go:299] "Volume detached for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-cni-binary-copy\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880943 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/2325ffef-9d5b-447f-b00e-3efc429acefe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880952 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pddnv\" (UniqueName: \"kubernetes.io/projected/e093be35-bb62-4843-b2e8-094545761610-kube-api-access-pddnv\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880961 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/9f71a554-e414-4bc3-96d2-674060397afe-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880969 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ddlk9\" (UniqueName: \"kubernetes.io/projected/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a-kube-api-access-ddlk9\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880978 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zg8nc\" (UniqueName: \"kubernetes.io/projected/2325ffef-9d5b-447f-b00e-3efc429acefe-kube-api-access-zg8nc\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880986 5117 reconciler_common.go:299] "Volume detached for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/f559dfa3-3917-43a2-97f6-61ddfda10e93-encryption-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.880996 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/01080b46-74f1-4191-8755-5152a57b3b25-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881004 5117 reconciler_common.go:299] "Volume detached for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-oauth-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881012 5117 reconciler_common.go:299] "Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-images\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881020 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881028 5117 reconciler_common.go:299] 
"Volume detached for volume \"kube-api-access-4g8ts\" (UniqueName: \"kubernetes.io/projected/92dfbade-90b6-4169-8c07-72cff7f2c82b-kube-api-access-4g8ts\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881038 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-99zj9\" (UniqueName: \"kubernetes.io/projected/d565531a-ff86-4608-9d19-767de01ac31b-kube-api-access-99zj9\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881047 5117 reconciler_common.go:299] "Volume detached for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-srv-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881055 5117 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/301e1965-1754-483d-b6cc-bfae7038bbca-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881062 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7afa918d-be67-40a6-803c-d3b0ae99d815-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881070 5117 reconciler_common.go:299] "Volume detached for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/301e1965-1754-483d-b6cc-bfae7038bbca-profile-collector-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881078 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-service-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881088 5117 reconciler_common.go:299] "Volume detached for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/81e39f7b-62e4-4fc9-992a-6535ce127a02-multus-daemon-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881096 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881105 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/f65c0ac1-8bca-454d-a2e6-e35cb418beac-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881113 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6dmhf\" (UniqueName: \"kubernetes.io/projected/736c54fe-349c-4bb9-870a-d1c1d1c03831-kube-api-access-6dmhf\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881122 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hckvg\" (UniqueName: \"kubernetes.io/projected/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-kube-api-access-hckvg\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881149 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7jjkz\" (UniqueName: \"kubernetes.io/projected/301e1965-1754-483d-b6cc-bfae7038bbca-kube-api-access-7jjkz\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881161 5117 reconciler_common.go:299] 
"Volume detached for volume \"images\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-images\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881173 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/16bdd140-dce1-464c-ab47-dd5798d1d256-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881186 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m5lgh\" (UniqueName: \"kubernetes.io/projected/d19cb085-0c5b-4810-b654-ce7923221d90-kube-api-access-m5lgh\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881197 5117 reconciler_common.go:299] "Volume detached for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/d19cb085-0c5b-4810-b654-ce7923221d90-audit\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881207 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7afa918d-be67-40a6-803c-d3b0ae99d815-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881215 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881224 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hm9x7\" (UniqueName: \"kubernetes.io/projected/f559dfa3-3917-43a2-97f6-61ddfda10e93-kube-api-access-hm9x7\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881231 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/7df94c10-441d-4386-93a6-6730fb7bcde0-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881239 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/9f71a554-e414-4bc3-96d2-674060397afe-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881250 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881258 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/af33e427-6803-48c2-a76a-dd9deb7cbf9a-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881270 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881278 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ks6v2\" (UniqueName: \"kubernetes.io/projected/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-kube-api-access-ks6v2\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881288 5117 reconciler_common.go:299] "Volume detached for volume \"mcc-auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/b4750666-1362-4001-abd0-6f89964cc621-mcc-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881298 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ws8zz\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-kube-api-access-ws8zz\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881305 5117 reconciler_common.go:299] "Volume detached for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/a7a88189-c967-4640-879e-27665747f20c-tmpfs\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881314 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-system-ocp-branding-template\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881322 5117 reconciler_common.go:299] "Volume detached for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/d7e8f42f-dc0e-424b-bb56-5ec849834888-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881331 5117 reconciler_common.go:299] "Volume detached for volume \"serviceca\" (UniqueName: \"kubernetes.io/configmap/5ebfebf6-3ecd-458e-943f-bb25b52e2718-serviceca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881338 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/c5f2bfad-70f6-4185-a3d9-81ce12720767-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881346 5117 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/9e9b5059-1b3e-4067-a63d-2952cbe863af-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881355 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-tknt7\" (UniqueName: \"kubernetes.io/projected/584e1f4a-8205-47d7-8efb-3afc6017c4c9-kube-api-access-tknt7\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881364 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2325ffef-9d5b-447f-b00e-3efc429acefe-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881372 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6077b63e-53a2-4f96-9d56-1ce0324e4913-metrics-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881379 5117 reconciler_common.go:299] "Volume detached for volume \"cert\" (UniqueName: \"kubernetes.io/secret/a52afe44-fb37-46ed-a1f8-bf39727a3cbe-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881388 5117 reconciler_common.go:299] "Volume detached for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca-console-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881395 5117 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: 
\"kubernetes.io/configmap/d565531a-ff86-4608-9d19-767de01ac31b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881403 5117 reconciler_common.go:299] "Volume detached for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/a208c9c2-333b-4b4a-be0d-bc32ec38a821-package-server-manager-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881412 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/736c54fe-349c-4bb9-870a-d1c1d1c03831-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881420 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/c491984c-7d4b-44aa-8c1e-d7974424fa47-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881427 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-twvbl\" (UniqueName: \"kubernetes.io/projected/b4750666-1362-4001-abd0-6f89964cc621-kube-api-access-twvbl\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881436 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881443 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/cc85e424-18b2-4924-920b-bd291a8c4b01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881452 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xnxbn\" (UniqueName: \"kubernetes.io/projected/ce090a97-9ab6-4c40-a719-64ff2acd9778-kube-api-access-xnxbn\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881460 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881467 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/d19cb085-0c5b-4810-b654-ce7923221d90-etcd-client\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881475 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9z4sw\" (UniqueName: \"kubernetes.io/projected/e1d2a42d-af1d-4054-9618-ab545e0ed8b7-kube-api-access-9z4sw\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881483 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/a555ff2e-0be6-46d5-897d-863bb92ae2b3-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881491 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/09cfa50b-4138-4585-a53e-64dd3ab73335-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881498 5117 reconciler_common.go:299] "Volume detached for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/92dfbade-90b6-4169-8c07-72cff7f2c82b-metrics-tls\") on node 
\"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881508 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/f65c0ac1-8bca-454d-a2e6-e35cb418beac-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881519 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/736c54fe-349c-4bb9-870a-d1c1d1c03831-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881527 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/c5f2bfad-70f6-4185-a3d9-81ce12720767-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881534 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881543 5117 reconciler_common.go:299] "Volume detached for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-auth-proxy-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881550 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/6edfcf45-925b-4eff-b940-95b6fc0b85d4-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881558 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881566 5117 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f559dfa3-3917-43a2-97f6-61ddfda10e93-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881575 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/fc8db2c7-859d-47b3-a900-2bd0c0b2973b-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881583 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/6077b63e-53a2-4f96-9d56-1ce0324e4913-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881591 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-q4smf\" (UniqueName: \"kubernetes.io/projected/0dd0fbac-8c0d-4228-8faa-abbeedabf7db-kube-api-access-q4smf\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881599 5117 reconciler_common.go:299] "Volume detached for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/a7a88189-c967-4640-879e-27665747f20c-apiservice-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881608 5117 reconciler_common.go:299] "Volume detached for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/42a11a02-47e1-488f-b270-2679d3298b0e-control-plane-machine-set-operator-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: 
I0123 08:54:44.881616 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mjwtd\" (UniqueName: \"kubernetes.io/projected/869851b9-7ffb-4af0-b166-1d8aa40a5f80-kube-api-access-mjwtd\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881624 5117 reconciler_common.go:299] "Volume detached for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/567683bd-0efc-4f21-b076-e28559628404-etcd-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881632 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m26jq\" (UniqueName: \"kubernetes.io/projected/567683bd-0efc-4f21-b076-e28559628404-kube-api-access-m26jq\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881641 5117 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/9e9b5059-1b3e-4067-a63d-2952cbe863af-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.881648 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w94wk\" (UniqueName: \"kubernetes.io/projected/01080b46-74f1-4191-8755-5152a57b3b25-kube-api-access-w94wk\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.883528 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap" (OuterVolumeSpecName: "whereabouts-flatfile-configmap") pod "869851b9-7ffb-4af0-b166-1d8aa40a5f80" (UID: "869851b9-7ffb-4af0-b166-1d8aa40a5f80"). InnerVolumeSpecName "whereabouts-flatfile-configmap". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.883626 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "31fa8943-81cc-4750-a0b7-0fa9ab5af883" (UID: "31fa8943-81cc-4750-a0b7-0fa9ab5af883"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.887125 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/34177974-8d82-49d2-a763-391d0df3bbd8-metrics-tls\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.887259 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/fc4541ce-7789-4670-bc75-5c2868e52ce0-webhook-cert\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.887190 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"c906ceb9-6e2c-479a-9f89-742dda69a66c\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-apiserver-check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"},\\\"containerID\\\":\\\"cri-o://510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"265m\\\",\\\"memory\\\":\\\"1Gi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-bundle-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-regeneration-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\
\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d\\\",\\\"exitCode\\\":255,\\\"finishedAt\\\":\\\"2026-01-23T08:54:18Z\\\",\\\"message\\\":\\\"var.go:172] \\\\\\\"Feature gate default state\\\\\\\" feature=\\\\\\\"ClientsAllowCBOR\\\\\\\" enabled=false\\\\nW0123 08:54:18.702780 1 builder.go:272] unable to get owner reference (falling back to namespace): pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\nI0123 08:54:18.702987 1 builder.go:304] check-endpoints version v0.0.0-unknown-c3d9642-c3d9642\\\\nI0123 08:54:18.703833 1 dynamic_serving_content.go:116] \\\\\\\"Loaded a new cert/key pair\\\\\\\" name=\\\\\\\"serving-cert::/tmp/serving-cert-2681865112/tls.crt::/tmp/serving-cert-2681865112/tls.key\\\\\\\"\\\\nI0123 08:54:18.869071 1 requestheader_controller.go:255] Loaded a new request header values for RequestHeaderAuthRequestController\\\\nI0123 08:54:18.873080 1 maxinflight.go:139] \\\\\\\"Initialized nonMutatingChan\\\\\\\" len=400\\\\nI0123 08:54:18.873108 1 maxinflight.go:145] \\\\\\\"Initialized mutatingChan\\\\\\\" len=200\\\\nI0123 08:54:18.873170 1 maxinflight.go:116] \\\\\\\"Set denominator for readonly requests\\\\\\\" limit=400\\\\nI0123 08:54:18.873179 1 maxinflight.go:120] \\\\\\\"Set denominator for mutating requests\\\\\\\" limit=200\\\\nI0123 08:54:18.882226 1 secure_serving.go:57] Forcing use of http/1.1 only\\\\nW0123 08:54:18.882250 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_ECDSA_WITH_AES_128_CBC_SHA256' detected.\\\\nW0123 08:54:18.882257 1 secure_serving.go:69] Use of insecure cipher 'TLS_ECDHE_RSA_WITH_AES_128_CBC_SHA256' detected.\\\\nI0123 08:54:18.882253 1 genericapiserver.go:546] MuxAndDiscoveryComplete has all endpoints registered and discovery information is complete\\\\nW0123 08:54:18.882263 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_GCM_SHA256' detected.\\\\nW0123 08:54:18.882298 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_GCM_SHA384' detected.\\\\nW0123 08:54:18.882302 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_128_CBC_SHA' detected.\\\\nW0123 08:54:18.882306 1 secure_serving.go:69] Use of insecure cipher 'TLS_RSA_WITH_AES_256_CBC_SHA' detected.\\\\nF0123 08:54:18.884111 1 cmd.go:182] pods \\\\\\\"kube-apiserver-crc\\\\\\\" not found\\\\n\\\",\\\"reason\\\":\\\"Error\\\",\\\"startedAt\\\":\\\"2026-01-23T08:54:18Z\\\"}},\\\"name\\\":\\\"kube-apiserver-check-endpoints\\\",\\\"ready\\\":false,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":3,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"message\\\":\\\"back-off 
40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\\\",\\\"reason\\\":\\\"CrashLoopBackOff\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-apiserver-insecure-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:53:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/kube-apiserver\\\",\\\"name\\\":\\\"audit-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:53:08Z\\\"}}\" for pod \"openshift-kube-apiserver\"/\"kube-apiserver-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.887676 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.887699 5117 projected.go:289] 
Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.887713 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.887777 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:45.387758861 +0000 UTC m=+97.143883887 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.889855 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.889886 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.889862 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l" (OuterVolumeSpecName: "kube-api-access-sbc2l") pod "593a3561-7760-45c5-8f91-5aaef7475d0f" (UID: "593a3561-7760-45c5-8f91-5aaef7475d0f"). InnerVolumeSpecName "kube-api-access-sbc2l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.889899 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.889996 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:45.389973622 +0000 UTC m=+97.146098718 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.891456 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m7xz2\" (UniqueName: \"kubernetes.io/projected/34177974-8d82-49d2-a763-391d0df3bbd8-kube-api-access-m7xz2\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.893965 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf" (OuterVolumeSpecName: "kube-api-access-ptkcf") pod "7599e0b6-bddf-4def-b7f2-0b32206e8651" (UID: "7599e0b6-bddf-4def-b7f2-0b32206e8651"). InnerVolumeSpecName "kube-api-access-ptkcf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.894594 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "567683bd-0efc-4f21-b076-e28559628404" (UID: "567683bd-0efc-4f21-b076-e28559628404"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.895516 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dsgwk\" (UniqueName: \"kubernetes.io/projected/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-kube-api-access-dsgwk\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.896782 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" (UID: "d45be74c-0d98-4d18-90e4-f7ef1b6daaf7"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.896869 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate" (OuterVolumeSpecName: "default-certificate") pod "18f80adb-c1c3-49ba-8ee4-932c851d3897" (UID: "18f80adb-c1c3-49ba-8ee4-932c851d3897"). InnerVolumeSpecName "default-certificate". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.897285 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" (UID: "20ce4d18-fe25-4696-ad7c-1bd2d6200a3e"). InnerVolumeSpecName "trusted-ca". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.897372 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca" (OuterVolumeSpecName: "client-ca") pod "a555ff2e-0be6-46d5-897d-863bb92ae2b3" (UID: "a555ff2e-0be6-46d5-897d-863bb92ae2b3"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.898498 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8nt2j\" (UniqueName: \"kubernetes.io/projected/fc4541ce-7789-4670-bc75-5c2868e52ce0-kube-api-access-8nt2j\") pod \"network-node-identity-dgvkt\" (UID: \"fc4541ce-7789-4670-bc75-5c2868e52ce0\") " pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.899461 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b2eb2937-e949-4b86-8d71-9cdbaa968054\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:03Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8794c97d1edfd02d317fd806304d3ee74adfa38c1c6c90062491b0c4a7f0467\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://0bfe30757bc73cbc8b856ff7d9d585995d1a3e47779114005149743244a6f380\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d
7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://f7f64e4f8c3cec7d99e275022f64107f71183b68fd53497362f1a58dd32dfa60\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:e504172345491d90bbbf1e7e45488e73073f4c6d7c2355245871051596fc85db\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-scheduler-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://2158a444f2e5ed189533f6a6fa74845441f50423e82571eb5347decc879e0adf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"wait-for-host-port\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"15m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://2158a444f2e5ed189533f6a6fa74845441f50423e82571eb5347decc879e0adf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:53:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:53:08Z\\\"}}\" for pod \"openshift-kube-scheduler\"/\"openshift-kube-scheduler-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.911039 5117 
status_manager.go:919] "Failed to update status for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"f863fff9-286a-45fa-b8f0-8a86994b8440\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [check-endpoints]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"check-endpoints\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-l7w75\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-diagnostics\"/\"network-check-source-5bb8f5cd97-xdvz5\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.920458 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" (UID: "71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.921875 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "94a6e063-3d1a-4d44-875d-185291448c31" (UID: "94a6e063-3d1a-4d44-875d-185291448c31"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.922686 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"fc4541ce-7789-4670-bc75-5c2868e52ce0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [webhook approver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"approver\\\",\\\"ready\\\":false,\\\"restartCount\\\":6,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"webhook\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/webhook-cert/\\\",\\\"name\\\":\\\"webhook-cert\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/ovnkube-identity-config\\\",\\\"name\\\":\\\"ovnkube-identity-cm\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8nt2j\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-node-identity\"/\"network-node-identity-dgvkt\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.927862 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.932716 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"8d788132-7791-4db1-9057-4112a18f44fa\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-rbac-proxy 
ovnkube-cluster-manager]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/pki/tls/metrics-cert\\\",\\\"name\\\":\\\"ovn-control-plane-metrics-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wgfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:174629230f874ae7d9ceda909ef45aced0cc8b21537851a0aceca55b0685b122\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"ovnkube-cluster-manager\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/run/ovnkube-config/\\\",\\\"name\\\":\\\"ovnkube-config\\\"},{\\\"mountPath\\\":\\\"/env\\\",\\\"name\\\":\\\"env-overrides\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5wgfg\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-ovn-kubernetes\"/\"ovnkube-control-plane-57b78d8988-997cn\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.941819 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-dns/node-resolver-qccfb" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"586f9834-c55f-46f5-b903-ca1c6b6805b0\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[dns-node-resolver]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5c5d7468f6838b6a714482e62ea956659212f3415ec8f69989f75eb6d8744a6e\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"dns-node-resolver\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/hosts\\\",\\\"name\\\":\\\"hosts-file\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-djpnc\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-dns\"/\"node-resolver-qccfb\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.952599 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-g7xdw" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"70f944bb-0390-45c1-914f-5389215db1cd\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/run/multus/cni/net.d\\\",\\\"name\\\":\\\"multus-cni-dir\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/run/multus\\\",\\\"name\\\":\\\"multus-socket-dir-parent\\\"},{\\\"mountPath\\\":\\\"/run/k8s.cni.cncf.io\\\",\\\"name\\\":\\\"host-run-k8s-cni-cncf-io\\\"},{\\\"mountPath\\\":\\\"/run/netns\\\",\\\"name\\\":\\\"host-run-netns\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/bin\\\",\\\"name\\\":\\\"host-var-lib-cni-bin\\\"},{\\\"mountPath\\\":\\\"/var/lib/cni/multus\\\",\\\"name\\\":\\\"host-var-lib-cni-multus\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"host-var-lib-kubelet\\\"},{\\\"mountPath\\\":\\\"/hostroot\\\",\\\"name\\\":\\\"hostroot\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/net.d\\\",\\\"name\\\":\\\"multus-conf-dir\\\"},{\\\"mountPath\\\":\\\"/etc/cni/net.d/multus.d\\\",\\\"name\\\":\\\"multus-daemon-config\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/cni/multus/certs\\\",\\\"name\\\":\\\"host-run-multus-certs\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kubernetes\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-56jgw\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-g7xdw\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.966046 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"4988c01f-7a73-4d59-b2b4-bc4c1d02c5e0\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://05c6e942f8fb1cecb26e15deafc78dd49746f38cfe0758e6d7919bb9fc3e6dea\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy-crio\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"20m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/lib/kubelet\\\",\\\"name\\\":\\\"var-lib-kubelet\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://e7de6f11c0a4d297de34306d843f20b8f5ecfaa8a9834696edb6cc5a0a96b1d3\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://e7de6f11c0a4d297de34306d843f20b8f5ecfaa8a9834696edb6cc5a0a96b1d3\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:53:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":65534,\\\"supplementalGroups\\\":[65534],\\\"uid\\\":65534}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var\\\",\\\"name\\\":\\\"var-lib-kubelet\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:53:08Z\\\"}}\" for pod 
\"openshift-machine-config-operator\"/\"kube-rbac-proxy-crio-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.969760 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.969933 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.970036 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.970114 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.970210 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:44Z","lastTransitionTime":"2026-01-23T08:54:44Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.976730 5117 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.976850 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [networking-console-plugin]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:fbdfe828b092b23e6d4480daf3e0216aada6debaf1ef1b314a0a31e73ebf13c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"networking-console-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":4,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/cert\\\",\\\"name\\\":\\\"networking-console-plugin-cert\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/etc/nginx/nginx.conf\\\",\\\"name\\\":\\\"nginx-conf\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":null,\\\"podIPs\\\":null}}\" for pod \"openshift-network-console\"/\"networking-console-plugin-5ff7774fd9-nljh6\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982457 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982636 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-tuning-conf-dir\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982643 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982707 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982757 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982788 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5wgfg\" (UniqueName: \"kubernetes.io/projected/8d788132-7791-4db1-9057-4112a18f44fa-kube-api-access-5wgfg\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: 
I0123 08:54:44.982819 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70f944bb-0390-45c1-914f-5389215db1cd-cni-binary-copy\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982845 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-systemd-units\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982884 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/586f9834-c55f-46f5-b903-ca1c6b6805b0-hosts-file\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982908 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-system-cni-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982931 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70f944bb-0390-45c1-914f-5389215db1cd-multus-daemon-config\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982966 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fb4dx\" (UniqueName: \"kubernetes.io/projected/ad3e9798-99d9-456f-b969-840508a6ac91-kube-api-access-fb4dx\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982972 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hosts-file\" (UniqueName: \"kubernetes.io/host-path/586f9834-c55f-46f5-b903-ca1c6b6805b0-hosts-file\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.982989 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-kubelet\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983012 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-netd\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983041 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-serviceca\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983066 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8j98l\" (UniqueName: \"kubernetes.io/projected/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-kube-api-access-8j98l\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983091 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-cnibin\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983114 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-etc-kubernetes\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983160 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d41b436-a78c-412b-b56c-54b8d73381e6-proxy-tls\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983188 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-var-lib-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983211 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-os-release\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983232 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-netns\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983106 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-system-cni-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983261 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-script-lib\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" 
Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983286 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-kubelet\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983312 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983341 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5v4m4\" (UniqueName: \"kubernetes.io/projected/2d41b436-a78c-412b-b56c-54b8d73381e6-kube-api-access-5v4m4\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983367 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-log-socket\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983395 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-ovn-kubernetes\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983417 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-djpnc\" (UniqueName: \"kubernetes.io/projected/586f9834-c55f-46f5-b903-ca1c6b6805b0-kube-api-access-djpnc\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983497 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-cni-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983526 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-systemd\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983522 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-cnibin\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983562 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-conf-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983579 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-kubernetes\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-etc-kubernetes\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983608 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983629 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-kubelet\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-kubelet\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983634 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d788132-7791-4db1-9057-4112a18f44fa-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983679 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-daemon-config\" (UniqueName: \"kubernetes.io/configmap/70f944bb-0390-45c1-914f-5389215db1cd-multus-daemon-config\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983701 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-cni-bin\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983739 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-multus-certs\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983772 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983840 5117 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-xn4sb\" (UniqueName: \"kubernetes.io/projected/d582122d-1bf3-4b38-95a3-a89488b98725-kube-api-access-xn4sb\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983870 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983901 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-k8s-cni-cncf-io\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983905 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-ovnkube-config\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983924 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-etc-kube\" (UniqueName: \"kubernetes.io/host-path/34177974-8d82-49d2-a763-391d0df3bbd8-host-etc-kube\") pod \"network-operator-7bdcf4f5bd-7fjxv\" (UID: \"34177974-8d82-49d2-a763-391d0df3bbd8\") " pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983935 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-hostroot\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983974 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-systemd\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984068 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-log-socket\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984098 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-ovn-kubernetes\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984159 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serviceca\" (UniqueName: 
\"kubernetes.io/configmap/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-serviceca\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984191 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/70f944bb-0390-45c1-914f-5389215db1cd-cni-binary-copy\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984309 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-kubelet\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984314 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-conf-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-conf-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984373 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-var-lib-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984477 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-os-release\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984506 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-netd\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984524 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-netns\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.983162 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-systemd-units\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.984577 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984618 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: 
\"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-script-lib\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984661 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-multus-certs\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-multus-certs\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984685 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-bin\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-cni-bin\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984769 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-k8s-cni-cncf-io\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-run-k8s-cni-cncf-io\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984890 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-cni-dir\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-cni-dir\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984912 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"hostroot\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-hostroot\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.984972 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cnibin\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985014 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985034 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985051 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-ovn\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985049 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-env-overrides\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985068 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-bin\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985083 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-config\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985103 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985124 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-node-log\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985159 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d41b436-a78c-412b-b56c-54b8d73381e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985178 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2d41b436-a78c-412b-b56c-54b8d73381e6-rootfs\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985193 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-os-release\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985208 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-slash\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " 
pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985228 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-etc-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985245 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-env-overrides\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985268 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-netns\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985291 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d582122d-1bf3-4b38-95a3-a89488b98725-ovn-node-metrics-cert\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985315 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/586f9834-c55f-46f5-b903-ca1c6b6805b0-tmp-dir\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985338 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-cni-multus\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985365 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-system-cni-dir\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985393 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vbxkb\" (UniqueName: \"kubernetes.io/projected/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-kube-api-access-vbxkb\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985417 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-host\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc 
kubenswrapper[5117]: I0123 08:54:44.985439 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-socket-dir-parent\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985461 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-56jgw\" (UniqueName: \"kubernetes.io/projected/70f944bb-0390-45c1-914f-5389215db1cd-kube-api-access-56jgw\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985483 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985540 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/31fa8943-81cc-4750-a0b7-0fa9ab5af883-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985555 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985567 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sbc2l\" (UniqueName: \"kubernetes.io/projected/593a3561-7760-45c5-8f91-5aaef7475d0f-kube-api-access-sbc2l\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985580 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ptkcf\" (UniqueName: \"kubernetes.io/projected/7599e0b6-bddf-4def-b7f2-0b32206e8651-kube-api-access-ptkcf\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985588 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cnibin\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cnibin\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985551 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cni-sysctl-allowlist\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985595 5117 reconciler_common.go:299] "Volume detached for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/869851b9-7ffb-4af0-b166-1d8aa40a5f80-whereabouts-flatfile-configmap\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985628 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: 
\"kubernetes.io/configmap/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985636 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985640 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/567683bd-0efc-4f21-b076-e28559628404-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985654 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/a555ff2e-0be6-46d5-897d-863bb92ae2b3-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985673 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985684 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-ovn\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.985710 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-bin\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.986003 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-env-overrides\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.986033 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-netns\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.986812 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" err="failed to patch status 
\"{\\\"metadata\\\":{\\\"uid\\\":\\\"2d41b436-a78c-412b-b56c-54b8d73381e6\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [machine-config-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/tls/private\\\",\\\"name\\\":\\\"proxy-tls\\\"},{\\\"mountPath\\\":\\\"/etc/kube-rbac-proxy\\\",\\\"name\\\":\\\"mcd-auth-proxy-config\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v4m4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9414357f9345a841e0565265700ecc6637f846c83bd5908dbb7b306432465115\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"machine-config-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/rootfs\\\",\\\"name\\\":\\\"rootfs\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-5v4m4\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-machine-config-operator\"/\"machine-config-daemon-qfh6g\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.986966 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"whereabouts-flatfile-configmap\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-whereabouts-flatfile-configmap\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " 
pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987021 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host\" (UniqueName: \"kubernetes.io/host-path/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-host\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987333 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/586f9834-c55f-46f5-b903-ca1c6b6805b0-tmp-dir\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987382 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-multus\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-host-var-lib-cni-multus\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987412 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"system-cni-dir\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-system-cni-dir\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987624 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"rootfs\" (UniqueName: \"kubernetes.io/host-path/2d41b436-a78c-412b-b56c-54b8d73381e6-rootfs\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987663 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/428b39f5-eb1c-4f65-b7a4-eeb6e84860cc-host-slash\") pod \"iptables-alerter-5jnd7\" (UID: \"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc\") " pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987694 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-node-log\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.987876 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/2d41b436-a78c-412b-b56c-54b8d73381e6-proxy-tls\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988062 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"multus-socket-dir-parent\" (UniqueName: \"kubernetes.io/host-path/70f944bb-0390-45c1-914f-5389215db1cd-multus-socket-dir-parent\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988068 5117 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-slash\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: E0123 08:54:44.988257 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs podName:ad3e9798-99d9-456f-b969-840508a6ac91 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:45.48823548 +0000 UTC m=+97.244360496 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs") pod "network-metrics-daemon-gcn4t" (UID: "ad3e9798-99d9-456f-b969-840508a6ac91") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988269 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-etc-openvswitch\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988296 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"os-release\" (UniqueName: \"kubernetes.io/host-path/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-os-release\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988236 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcd-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/2d41b436-a78c-412b-b56c-54b8d73381e6-mcd-auth-proxy-config\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988216 5117 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/9e9b5059-1b3e-4067-a63d-2952cbe863af-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988400 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/94a6e063-3d1a-4d44-875d-185291448c31-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988423 5117 reconciler_common.go:299] "Volume detached for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/18f80adb-c1c3-49ba-8ee4-932c851d3897-default-certificate\") on node \"crc\" DevicePath \"\"" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.988819 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-config\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.990490 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: 
\"kubernetes.io/secret/8d788132-7791-4db1-9057-4112a18f44fa-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.991548 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-binary-copy\" (UniqueName: \"kubernetes.io/configmap/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-cni-binary-copy\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:44 crc kubenswrapper[5117]: I0123 08:54:44.992311 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d582122d-1bf3-4b38-95a3-a89488b98725-ovn-node-metrics-cert\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.001236 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8j98l\" (UniqueName: \"kubernetes.io/projected/2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8-kube-api-access-8j98l\") pod \"node-ca-b7cxh\" (UID: \"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\") " pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.001798 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fb4dx\" (UniqueName: \"kubernetes.io/projected/ad3e9798-99d9-456f-b969-840508a6ac91-kube-api-access-fb4dx\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.002422 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5wgfg\" (UniqueName: \"kubernetes.io/projected/8d788132-7791-4db1-9057-4112a18f44fa-kube-api-access-5wgfg\") pod \"ovnkube-control-plane-57b78d8988-997cn\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.002571 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xn4sb\" (UniqueName: \"kubernetes.io/projected/d582122d-1bf3-4b38-95a3-a89488b98725-kube-api-access-xn4sb\") pod \"ovnkube-node-6t5h9\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.004486 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vbxkb\" (UniqueName: \"kubernetes.io/projected/3fb9d7cb-4569-4674-b9bd-78ee34ca14a3-kube-api-access-vbxkb\") pod \"multus-additional-cni-plugins-ggtdd\" (UID: \"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\") " pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.005346 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5v4m4\" (UniqueName: \"kubernetes.io/projected/2d41b436-a78c-412b-b56c-54b8d73381e6-kube-api-access-5v4m4\") pod \"machine-config-daemon-qfh6g\" (UID: \"2d41b436-a78c-412b-b56c-54b8d73381e6\") " pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.006363 5117 
status_manager.go:919] "Failed to update status for pod" pod="openshift-etcd/etcd-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"b4931ea8-92b1-4a9e-ba2b-f6fd757157c6\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:12Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:24Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"},\\\"containerID\\\":\\\"cri-o://f6e52f86f24ee729f545c9bad677bb4ddd8adbaf01e1ef51ad8d1e58846b7ad4\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"300m\\\",\\\"memory\\\":\\\"600Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://5fa8b4bcf2a4167a8ad401e18de0801667f0bb8d2fef588b2571b86bdfdf2b1e\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-metrics\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"40m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50
Mi\\\"},\\\"containerID\\\":\\\"cri-o://e41d19d165367b61fd737269feaec71d83b06650aa554fb0f344cb47660de49d\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-readyz\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd/\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://025dc7d6404c15d6e7227ab68e50229d35a3b4db2d72550da05d7a9bf9595453\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:bf05b9b2ba66351a6c59f4259fb377f62237a00af3b4f0b95f64409e2f25770e\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-rev\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:14Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/lib/etcd\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://4456a3a289dc8737df18d0c9f29a8c7368f3a40bb66355f63905a4c95a7c1ecb\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcdctl\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:13Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/manifests\\\",\\\"name\\\":\\\"static-pod-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd/\\\",\\\"name\\\":\\\"data-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\
\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://d8f4aea5fe77a64b87e80b61b975c69eee71967bc4201d8f0c39796b6f067385\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"setup\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://d8f4aea5fe77a64b87e80b61b975c69eee71967bc4201d8f0c39796b6f067385\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:53:10Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:53:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/log/etcd\\\",\\\"name\\\":\\\"log-dir\\\"},{\\\"mountPath\\\":\\\"/var/lib/etcd-auto-backup\\\",\\\"name\\\":\\\"etcd-auto-backup-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://b1c8813cbc8ffbf1a3babf5e3578eff17e7aaaf05c002a410486e4a03e6b1cdf\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-ensure-env-vars\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://b1c8813cbc8ffbf1a3babf5e3578eff17e7aaaf05c002a410486e4a03e6b1cdf\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:53:11Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}}},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"},\\\"containerID\\\":\\\"cri-o://ae5584925e81fff2a00f157d8af42ac7e7bb2fd8737463d7ba3be0e5a116625c\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:9e388ee2b3562b6267447cbcc4b95ca7a61bf361840d36a682480da671b83612\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"etcd-resources-copy\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"60Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"terminated\\\":{\\\"containerID\\\":\\\"cri-o://ae5584925e81fff2a00f157d8af42ac7e7bb2fd8737463d7ba3be0e5a116625c\\\",\\\"exitCode\\\":0,\\\"finishedAt\\\":\\\"2026-01-23T08:53:12Z\\\",\\\"reason\\\":\\\"Completed\\\",\\\"startedAt\\\":\\\"2026-01-23T08:53:12Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/us
r/local/bin\\\",\\\"name\\\":\\\"usr-local-bin\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"}]}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:53:08Z\\\"}}\" for pod \"openshift-etcd\"/\"etcd-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.007571 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-djpnc\" (UniqueName: \"kubernetes.io/projected/586f9834-c55f-46f5-b903-ca1c6b6805b0-kube-api-access-djpnc\") pod \"node-resolver-qccfb\" (UID: \"586f9834-c55f-46f5-b903-ca1c6b6805b0\") " pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.008649 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-56jgw\" (UniqueName: \"kubernetes.io/projected/70f944bb-0390-45c1-914f-5389215db1cd-kube-api-access-56jgw\") pod \"multus-g7xdw\" (UID: \"70f944bb-0390-45c1-914f-5389215db1cd\") " pod="openshift-multus/multus-g7xdw" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.019602 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"34177974-8d82-49d2-a763-391d0df3bbd8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-operator]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:a8581a82ba5c8343a743aa302c4848249d8c32a9f2cd10fa68d89d835a1bdf8b\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{\\\"terminated\\\":{\\\"exitCode\\\":137,\\\"finishedAt\\\":null,\\\"message\\\":\\\"The container could not be located when the pod was deleted. 
The container used to be Running\\\",\\\"reason\\\":\\\"ContainerStatusUnknown\\\",\\\"startedAt\\\":null}},\\\"name\\\":\\\"network-operator\\\",\\\"ready\\\":false,\\\"restartCount\\\":5,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes\\\",\\\"name\\\":\\\"host-etc-kube\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/serving-cert\\\",\\\"name\\\":\\\"metrics-tls\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-m7xz2\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}]}}\" for pod \"openshift-network-operator\"/\"network-operator-7bdcf4f5bd-7fjxv\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.027585 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-image-registry/node-ca-b7cxh" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [node-ca]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:dcb03ccba25366bbdf74cbab6738e7ef1f97f62760886ec445a40cdf29b60418\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"node-ca\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/tmp/serviceca\\\",\\\"name\\\":\\\"serviceca\\\"},{\\\"mountPath\\\":\\\"/etc/docker/certs.d\\\",\\\"name\\\":\\\"host\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-8j98l\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-image-registry\"/\"node-ca-b7cxh\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post 
\"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.031250 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.037972 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-node-identity/network-node-identity-dgvkt" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.038019 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/network-metrics-daemon-gcn4t" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"ad3e9798-99d9-456f-b969-840508a6ac91\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [network-metrics-daemon kube-rbac-proxy]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:16d5a229c172bde2f4238e8a88602fd6351d80b262f35484740a979d8b3567a5\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-rbac-proxy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/metrics\\\",\\\"name\\\":\\\"metrics-certs\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fb4dx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:49b34ce0d25eec7a6077f4bf21bf7d4e64e598d28785a20b9ee3594423b7de14\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"network-metrics-daemon\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"ContainerCreating\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-fb4dx\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"network-metrics-daemon-gcn4t\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": 
failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.048108 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-operator/iptables-alerter-5jnd7" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.051828 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3\\\"},\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"type\\\":\\\"Initialized\\\"},{\\\"type\\\":\\\"Ready\\\"},{\\\"type\\\":\\\"ContainersReady\\\"},{\\\"type\\\":\\\"PodScheduled\\\"}],\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with incomplete status: [egress-router-binary-copy cni-plugins bond-cni-plugin routeoverride-cni whereabouts-cni-bincopy whereabouts-cni]\\\",\\\"reason\\\":\\\"ContainersNotInitialized\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: [kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:54:44Z\\\",\\\"message\\\":\\\"containers with unready status: 
[kube-multus-additional-cni-plugins]\\\",\\\"reason\\\":\\\"ContainersNotReady\\\",\\\"status\\\":\\\"False\\\",\\\"type\\\":\\\"ContainersReady\\\"}],\\\"containerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:85f1323d589d7af13b096b1f9b438b9dfe08f3fab37534e2780e6490a665bf05\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-multus-additional-cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"initContainerStatuses\\\":[{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:5a2a7b3c2f1598189d8880e6aa15ab11a65b201f25012f77ba41e7487a60729a\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"egress-router-binary-copy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:1b55c029f731ebbde3c5580eef98a588264f4d6a8ae667805c9521dd1ecf1d5d\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cni-plugins\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/tuning/\\\",\\\"name\\\":\\\"tuning-conf-dir\\\"},{\\\"mountPath\\\":\\\"/sysctls\\\",\\\"name\\\":\\\"cni-sysctl-allowlist\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:6efa070ceb93cc5fc2e76eab6d9c96ac3c4f8812085d0b6eb6e3f513b5bac782\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"bond-cni-plugin\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"na
me\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:c3454e762466e22e2a893650b9781823558bc6fdfda2aa4188aff3cb819014c4\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"routeoverride-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni-bincopy\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/entrypoint\\\",\\\"name\\\":\\\"cni-binary-copy\\\"},{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/os-release\\\",\\\"name\\\":\\\"os-release\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]},{\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:be2edaed22535093bdb486afe5960ff4f3b0bd96f88dc1753b584cc28184a0b0\\\",\\\"imageID\\\":\\\"\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"whereabouts-cni\\\",\\\"ready\\\":false,\\\"restartCount\\\":0,\\\"started\\\":false,\\\"state\\\":{\\\"waiting\\\":{\\\"reason\\\":\\\"PodInitializing\\\"}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/host/opt/cni/bin\\\",\\\"name\\\":\\\"cnibin\\\"},{\\\"mountPath\\\":\\\"/host/etc/cni/net.d\\\",\\\"name\\\":\\\"system-cni-dir\\\"},{\\\"mountPath\\\":\\\"/etc/whereabouts/config\\\",\\\"name\\\":\\\"whereabouts-flatfile-configmap\\\"},{\\\"mountPath\\\":\\\"/var/run/secrets/kubernetes.io/serviceaccount\\\",\\\"name\\\":\\\"kube-api-access-vbxkb\\\",\\\"readOnly\\\":true,\\\"recursiveReadOnly\\\":\\\"Disabled\\\"}]}],\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:54:44Z\\\"}}\" for pod \"openshift-multus\"/\"multus-additional-cni-plugins-ggtdd\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.054182 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.062328 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.064475 5117 status_manager.go:919] "Failed to update status for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" err="failed to patch status \"{\\\"metadata\\\":{\\\"uid\\\":\\\"791c9f89-5670-451d-b441-107b10c09e16\\\"},\\\"status\\\":{\\\"conditions\\\":[{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:11Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodReadyToStartContainers\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Initialized\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"Ready\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:35Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"ContainersReady\\\"},{\\\"lastProbeTime\\\":null,\\\"lastTransitionTime\\\":\\\"2026-01-23T08:53:08Z\\\",\\\"status\\\":\\\"True\\\",\\\"type\\\":\\\"PodScheduled\\\"}],\\\"containerStatuses\\\":[{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://a186274b94913495d46da21de467d9602aa5642fe66f59d1d0936ad0c257f299\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:7c6a47106effd9e9a41131e2bf6c832b80cd77b3439334f760b35b0729f2fb00\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"cluster-policy-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"10m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"},\\\"containerID\\\":\\\"cri-o://250a0dfb6e83a36400a2c1fee2a02069a3ac81a98ee32800a2e3fc5895460f6a\\\",\\\"image\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"imageID\\\":\\\"quay.io/openshift-release-dev/ocp-v4.0-art-dev@sha256:8a46fa8feeea5d04fd602559027f8bacc97e12bbf8e33793dca08e812e1f8825\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"60m\\\",\\\"memory\\\":\\\"200Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:09Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-cert
s\\\",\\\"name\\\":\\\"cert-dir\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://44b22cd33c6dea6d7f7dba5fa840f09b94303c50382f756c820b3528d1f3e83a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-cert-syncer\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:10Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]},{\\\"allocatedResources\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"},\\\"containerID\\\":\\\"cri-o://7be69ecce38fd901dfc42bc01438a8261ad83a71307874878dee2cb2ef5b1b1a\\\",\\\"image\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"imageID\\\":\\\"quay.io/crcont/openshift-crc-cluster-kube-controller-manager-operator@sha256:f69b9cc9b9cfde726109a9e12b80a3eefa472d7e29159df0fbc7143c48983cd6\\\",\\\"lastState\\\":{},\\\"name\\\":\\\"kube-controller-manager-recovery-controller\\\",\\\"ready\\\":true,\\\"resources\\\":{\\\"requests\\\":{\\\"cpu\\\":\\\"5m\\\",\\\"memory\\\":\\\"50Mi\\\"}},\\\"restartCount\\\":0,\\\"started\\\":true,\\\"state\\\":{\\\"running\\\":{\\\"startedAt\\\":\\\"2026-01-23T08:53:11Z\\\"}},\\\"user\\\":{\\\"linux\\\":{\\\"gid\\\":0,\\\"supplementalGroups\\\":[0],\\\"uid\\\":0}},\\\"volumeMounts\\\":[{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-resources\\\",\\\"name\\\":\\\"resource-dir\\\"},{\\\"mountPath\\\":\\\"/etc/kubernetes/static-pod-certs\\\",\\\"name\\\":\\\"cert-dir\\\"},{\\\"mountPath\\\":\\\"/tmp\\\",\\\"name\\\":\\\"tmp-dir\\\"},{\\\"mountPath\\\":\\\"/etc/pki/ca-trust/extracted/pem\\\",\\\"name\\\":\\\"ca-trust-dir\\\"},{\\\"mountPath\\\":\\\"/var/run/kubernetes\\\",\\\"name\\\":\\\"var-run-kubernetes\\\"}]}],\\\"hostIP\\\":\\\"192.168.126.11\\\",\\\"hostIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"phase\\\":\\\"Running\\\",\\\"podIP\\\":\\\"192.168.126.11\\\",\\\"podIPs\\\":[{\\\"ip\\\":\\\"192.168.126.11\\\"}],\\\"startTime\\\":\\\"2026-01-23T08:53:08Z\\\"}}\" for pod \"openshift-kube-controller-manager\"/\"kube-controller-manager-crc\": Internal error occurred: failed calling webhook \"pod.network-node-identity.openshift.io\": failed to call webhook: Post \"https://127.0.0.1:9743/pod?timeout=10s\": dial tcp 127.0.0.1:9743: connect: connection refused" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.071287 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/node-resolver-qccfb" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.072946 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.072998 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.073014 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.073035 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.073050 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: W0123 08:54:45.074801 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod8d788132_7791_4db1_9057_4112a18f44fa.slice/crio-6d95578c1a46fd52a4661b939852ab04a85fb7501de06ae4b4c69a4e5c6f7906 WatchSource:0}: Error finding container 6d95578c1a46fd52a4661b939852ab04a85fb7501de06ae4b4c69a4e5c6f7906: Status 404 returned error can't find the container with id 6d95578c1a46fd52a4661b939852ab04a85fb7501de06ae4b4c69a4e5c6f7906 Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.078306 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/node-ca-b7cxh" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.084474 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-g7xdw" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.091415 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" Jan 23 08:54:45 crc kubenswrapper[5117]: W0123 08:54:45.091737 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2d41b436_a78c_412b_b56c_54b8d73381e6.slice/crio-077ac093ea1b09fa2cb7de26aa97372c04f056399303f4558182c1e4208f29f7 WatchSource:0}: Error finding container 077ac093ea1b09fa2cb7de26aa97372c04f056399303f4558182c1e4208f29f7: Status 404 returned error can't find the container with id 077ac093ea1b09fa2cb7de26aa97372c04f056399303f4558182c1e4208f29f7 Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.098436 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:45 crc kubenswrapper[5117]: W0123 08:54:45.100705 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod586f9834_c55f_46f5_b903_ca1c6b6805b0.slice/crio-12b6f8e4d498adc5d08f843245bd9ed2505080141d4d240dd4b0e0683772351e WatchSource:0}: Error finding container 12b6f8e4d498adc5d08f843245bd9ed2505080141d4d240dd4b0e0683772351e: Status 404 returned error can't find the container with id 12b6f8e4d498adc5d08f843245bd9ed2505080141d4d240dd4b0e0683772351e Jan 23 08:54:45 crc kubenswrapper[5117]: W0123 08:54:45.133217 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2fd20aa5_9c6b_4f89_ab4d_03adb84c34a8.slice/crio-b8104bba47de20023e6e760222fdfb8d94e59cd405ba7a4c36a05326f251156d WatchSource:0}: Error finding container b8104bba47de20023e6e760222fdfb8d94e59cd405ba7a4c36a05326f251156d: Status 404 returned error can't find the container with id b8104bba47de20023e6e760222fdfb8d94e59cd405ba7a4c36a05326f251156d Jan 23 08:54:45 crc kubenswrapper[5117]: W0123 08:54:45.134372 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod70f944bb_0390_45c1_914f_5389215db1cd.slice/crio-fd428d67a2ac77d94337c612b9fbe9e503b8cc6a6594aa9c94159c02da4e4e41 WatchSource:0}: Error finding container fd428d67a2ac77d94337c612b9fbe9e503b8cc6a6594aa9c94159c02da4e4e41: Status 404 returned error can't find the container with id fd428d67a2ac77d94337c612b9fbe9e503b8cc6a6594aa9c94159c02da4e4e41 Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.174857 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.174898 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.174908 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.174924 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.174936 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.200590 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qccfb" event={"ID":"586f9834-c55f-46f5-b903-ca1c6b6805b0","Type":"ContainerStarted","Data":"12b6f8e4d498adc5d08f843245bd9ed2505080141d4d240dd4b0e0683772351e"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.202154 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"077ac093ea1b09fa2cb7de26aa97372c04f056399303f4558182c1e4208f29f7"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.206970 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" event={"ID":"8d788132-7791-4db1-9057-4112a18f44fa","Type":"ContainerStarted","Data":"6d95578c1a46fd52a4661b939852ab04a85fb7501de06ae4b4c69a4e5c6f7906"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.210502 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"6084f85be741c75adcd74742d78d46d9d219195e5d3b1468642a515be2ebb094"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.212229 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"87599575c899cc245520f59be906aeb601d24b0ad2b5be3297518f9534812e5d"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.213200 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b7cxh" event={"ID":"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8","Type":"ContainerStarted","Data":"b8104bba47de20023e6e760222fdfb8d94e59cd405ba7a4c36a05326f251156d"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.214918 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d6df16699d20b696486e211d138e9f7deb2b77aaf6467c605a6b22d70fdbdd83"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.216157 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerStarted","Data":"cd80dea9dc7e7d842625c6e19be8a0c8c68443c97d0dffe24a787ce0c36bbbdc"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.217378 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4ac0ecfc7ece57bc4f3f31d01cb19a6818f6353b61be15b1a1bdc8daa3e00bef"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.218144 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7xdw" event={"ID":"70f944bb-0390-45c1-914f-5389215db1cd","Type":"ContainerStarted","Data":"fd428d67a2ac77d94337c612b9fbe9e503b8cc6a6594aa9c94159c02da4e4e41"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.277189 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.277252 5117 
kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.277264 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.277282 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.277293 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.381926 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.381974 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.381988 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.382004 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.382015 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.392521 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.392678 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.392713 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.392757 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.392827 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.392840 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:54:46.392804572 +0000 UTC m=+98.148929598 (durationBeforeRetry 1s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.392892 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:46.392874364 +0000 UTC m=+98.148999420 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.392917 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393148 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393168 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393180 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393225 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:46.393216403 +0000 UTC m=+98.149341429 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393284 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393295 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393302 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393329 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. 
No retries permitted until 2026-01-23 08:54:46.393321696 +0000 UTC m=+98.149446722 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393394 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.393459 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:46.393426619 +0000 UTC m=+98.149551695 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.485223 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.485598 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.485611 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.485628 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.485640 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.493818 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.493967 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: E0123 08:54:45.494028 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs podName:ad3e9798-99d9-456f-b969-840508a6ac91 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:46.494014702 +0000 UTC m=+98.250139728 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs") pod "network-metrics-daemon-gcn4t" (UID: "ad3e9798-99d9-456f-b969-840508a6ac91") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.587704 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.587756 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.587768 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.587784 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.587794 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.690276 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.690316 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.690325 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.690339 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.690352 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.793665 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.793815 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.793825 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.793838 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.793849 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.896484 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.896544 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.896557 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.896574 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.896586 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.998635 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.998684 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.998712 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.998731 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:45 crc kubenswrapper[5117]: I0123 08:54:45.998742 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:45Z","lastTransitionTime":"2026-01-23T08:54:45Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.100846 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.100886 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.100896 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.100910 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.100921 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.203388 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.203698 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.203710 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.203728 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.203740 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.222859 5117 generic.go:358] "Generic (PLEG): container finished" podID="3fb9d7cb-4569-4674-b9bd-78ee34ca14a3" containerID="e37ee5766380aa6b7fb3cf77fafee2f927cc49d57f045bddec8da587a8243b77" exitCode=0 Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.222936 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerDied","Data":"e37ee5766380aa6b7fb3cf77fafee2f927cc49d57f045bddec8da587a8243b77"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.228496 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"4c97a288b06b200efb50eff85a9f635fc8ea17ba471b6d74b03cd6981c006c45"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.228542 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-node-identity/network-node-identity-dgvkt" event={"ID":"fc4541ce-7789-4670-bc75-5c2868e52ce0","Type":"ContainerStarted","Data":"57cffc0db9982aad5a52009204026eab02cc11b4cd305d26cd3ea7e2b11c4994"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.231261 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7xdw" event={"ID":"70f944bb-0390-45c1-914f-5389215db1cd","Type":"ContainerStarted","Data":"2ee96f4d14e54af55c30b158bbb62207aa46b2675985a445b3286d6eef1c6390"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.232355 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/node-resolver-qccfb" event={"ID":"586f9834-c55f-46f5-b903-ca1c6b6805b0","Type":"ContainerStarted","Data":"7cf7390f22b894fd90b9ae893a3f283f36e1bb51801da6182329e1914a698c9c"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.233760 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"9eccb5867c10071fdf750f409b575de5a94200dd6f5a5771855b57915e61d0d8"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.233805 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"c1ef916fc9ddc28b4cf6d39c794f56acc3529533f1acf6dc7245d743afdd645b"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.235523 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" event={"ID":"8d788132-7791-4db1-9057-4112a18f44fa","Type":"ContainerStarted","Data":"2159d23e539f3a8eebf8a266c8a1191c5d9d3a03b3228280c9be06074f58679a"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.235549 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" event={"ID":"8d788132-7791-4db1-9057-4112a18f44fa","Type":"ContainerStarted","Data":"b54f8abf9698c7d13f9379cc94bdb4312ebbc6b678c9c1231f59af21b597cd1a"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.236871 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" exitCode=0 Jan 23 08:54:46 crc 
kubenswrapper[5117]: I0123 08:54:46.236928 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.239333 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/node-ca-b7cxh" event={"ID":"2fd20aa5-9c6b-4f89-ab4d-03adb84c34a8","Type":"ContainerStarted","Data":"11e04e05d2820558fb3cae57094ddba6783f28aad8a1f8db1b2735d2ddb10003"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.240522 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/network-operator-7bdcf4f5bd-7fjxv" event={"ID":"34177974-8d82-49d2-a763-391d0df3bbd8","Type":"ContainerStarted","Data":"d38c91c11421d31f7129d8b99baa309daeb1583138f5551e114899f69e1fbf4f"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.266186 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/openshift-kube-scheduler-crc" podStartSLOduration=2.266168635 podStartE2EDuration="2.266168635s" podCreationTimestamp="2026-01-23 08:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.265927058 +0000 UTC m=+98.022052094" watchObservedRunningTime="2026-01-23 08:54:46.266168635 +0000 UTC m=+98.022293681" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.304989 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.305032 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.305042 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.305058 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.305069 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.398112 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/kube-rbac-proxy-crio-crc" podStartSLOduration=2.398090125 podStartE2EDuration="2.398090125s" podCreationTimestamp="2026-01-23 08:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.397874209 +0000 UTC m=+98.153999235" watchObservedRunningTime="2026-01-23 08:54:46.398090125 +0000 UTC m=+98.154215151" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.401177 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.401342 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.401416 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.401539 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.401577 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.401826 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.401849 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.401862 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object 
"openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.401921 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:48.401903029 +0000 UTC m=+100.158028055 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.402146 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.402320 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:48.40227615 +0000 UTC m=+100.158401366 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.402501 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:54:48.402489425 +0000 UTC m=+100.158614621 (durationBeforeRetry 2s). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.403465 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.403493 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.403504 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.403535 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.403567 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:48.403541544 +0000 UTC m=+100.159666560 (durationBeforeRetry 2s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.403589 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:48.403582505 +0000 UTC m=+100.159707521 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.406615 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.406661 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.406675 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.406691 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.406702 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.459277 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd/etcd-crc" podStartSLOduration=2.459259709 podStartE2EDuration="2.459259709s" podCreationTimestamp="2026-01-23 08:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.457638115 +0000 UTC m=+98.213763161" watchObservedRunningTime="2026-01-23 08:54:46.459259709 +0000 UTC m=+98.215384735" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.504269 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.504416 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.504520 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs podName:ad3e9798-99d9-456f-b969-840508a6ac91 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:48.504497507 +0000 UTC m=+100.260622533 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs") pod "network-metrics-daemon-gcn4t" (UID: "ad3e9798-99d9-456f-b969-840508a6ac91") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.509197 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.509247 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.509260 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.509277 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.509288 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.532088 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager/kube-controller-manager-crc" podStartSLOduration=2.532072212 podStartE2EDuration="2.532072212s" podCreationTimestamp="2026-01-23 08:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.530640203 +0000 UTC m=+98.286765249" watchObservedRunningTime="2026-01-23 08:54:46.532072212 +0000 UTC m=+98.288197238" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.590800 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podStartSLOduration=77.590785049 podStartE2EDuration="1m17.590785049s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.590389838 +0000 UTC m=+98.346514864" watchObservedRunningTime="2026-01-23 08:54:46.590785049 +0000 UTC m=+98.346910065" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.611872 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.611904 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.611914 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.611927 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.611937 5117 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.618906 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/node-ca-b7cxh" podStartSLOduration=77.618890118 podStartE2EDuration="1m17.618890118s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.618687892 +0000 UTC m=+98.374812918" watchObservedRunningTime="2026-01-23 08:54:46.618890118 +0000 UTC m=+98.375015144" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.670477 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" podStartSLOduration=77.670436009 podStartE2EDuration="1m17.670436009s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.669160354 +0000 UTC m=+98.425285380" watchObservedRunningTime="2026-01-23 08:54:46.670436009 +0000 UTC m=+98.426561035" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.679752 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/node-resolver-qccfb" podStartSLOduration=77.679734553 podStartE2EDuration="1m17.679734553s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.679424715 +0000 UTC m=+98.435549741" watchObservedRunningTime="2026-01-23 08:54:46.679734553 +0000 UTC m=+98.435859579" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.695919 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-g7xdw" podStartSLOduration=77.695897195 podStartE2EDuration="1m17.695897195s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:46.695569726 +0000 UTC m=+98.451694762" watchObservedRunningTime="2026-01-23 08:54:46.695897195 +0000 UTC m=+98.452022221" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.714254 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.714294 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.714314 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.714332 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.714346 5117 setters.go:618] "Node became not ready" node="crc" 
condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.769740 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.769885 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.769883 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.770021 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.770151 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.770191 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.770324 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:46 crc kubenswrapper[5117]: E0123 08:54:46.770435 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.774676 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="01080b46-74f1-4191-8755-5152a57b3b25" path="/var/lib/kubelet/pods/01080b46-74f1-4191-8755-5152a57b3b25/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.776089 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="09cfa50b-4138-4585-a53e-64dd3ab73335" path="/var/lib/kubelet/pods/09cfa50b-4138-4585-a53e-64dd3ab73335/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.777705 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0dd0fbac-8c0d-4228-8faa-abbeedabf7db" path="/var/lib/kubelet/pods/0dd0fbac-8c0d-4228-8faa-abbeedabf7db/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.801223 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0effdbcf-dd7d-404d-9d48-77536d665a5d" path="/var/lib/kubelet/pods/0effdbcf-dd7d-404d-9d48-77536d665a5d/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.803876 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="149b3c48-e17c-4a66-a835-d86dabf6ff13" path="/var/lib/kubelet/pods/149b3c48-e17c-4a66-a835-d86dabf6ff13/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.805762 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="16bdd140-dce1-464c-ab47-dd5798d1d256" path="/var/lib/kubelet/pods/16bdd140-dce1-464c-ab47-dd5798d1d256/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.807099 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="18f80adb-c1c3-49ba-8ee4-932c851d3897" path="/var/lib/kubelet/pods/18f80adb-c1c3-49ba-8ee4-932c851d3897/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.808041 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="20ce4d18-fe25-4696-ad7c-1bd2d6200a3e" path="/var/lib/kubelet/pods/20ce4d18-fe25-4696-ad7c-1bd2d6200a3e/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.809259 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2325ffef-9d5b-447f-b00e-3efc429acefe" path="/var/lib/kubelet/pods/2325ffef-9d5b-447f-b00e-3efc429acefe/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.810303 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="301e1965-1754-483d-b6cc-bfae7038bbca" path="/var/lib/kubelet/pods/301e1965-1754-483d-b6cc-bfae7038bbca/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.811649 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="31fa8943-81cc-4750-a0b7-0fa9ab5af883" path="/var/lib/kubelet/pods/31fa8943-81cc-4750-a0b7-0fa9ab5af883/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.813410 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="42a11a02-47e1-488f-b270-2679d3298b0e" path="/var/lib/kubelet/pods/42a11a02-47e1-488f-b270-2679d3298b0e/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.814202 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="567683bd-0efc-4f21-b076-e28559628404" path="/var/lib/kubelet/pods/567683bd-0efc-4f21-b076-e28559628404/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.815171 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="584e1f4a-8205-47d7-8efb-3afc6017c4c9" 
path="/var/lib/kubelet/pods/584e1f4a-8205-47d7-8efb-3afc6017c4c9/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.816459 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.816492 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.816463 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="593a3561-7760-45c5-8f91-5aaef7475d0f" path="/var/lib/kubelet/pods/593a3561-7760-45c5-8f91-5aaef7475d0f/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.816500 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.816663 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.816674 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.817645 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5ebfebf6-3ecd-458e-943f-bb25b52e2718" path="/var/lib/kubelet/pods/5ebfebf6-3ecd-458e-943f-bb25b52e2718/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.818626 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6077b63e-53a2-4f96-9d56-1ce0324e4913" path="/var/lib/kubelet/pods/6077b63e-53a2-4f96-9d56-1ce0324e4913/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.820299 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca" path="/var/lib/kubelet/pods/6a81eec9-f29e-49a0-a15a-f2f5bd2d95ca/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.821368 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6edfcf45-925b-4eff-b940-95b6fc0b85d4" path="/var/lib/kubelet/pods/6edfcf45-925b-4eff-b940-95b6fc0b85d4/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.822905 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6ee8fbd3-1f81-4666-96da-5afc70819f1a" path="/var/lib/kubelet/pods/6ee8fbd3-1f81-4666-96da-5afc70819f1a/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.823940 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a" path="/var/lib/kubelet/pods/71c8ffbe-59c6-4e7d-aa1a-bbd315b3414a/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.826153 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="736c54fe-349c-4bb9-870a-d1c1d1c03831" path="/var/lib/kubelet/pods/736c54fe-349c-4bb9-870a-d1c1d1c03831/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.827045 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7599e0b6-bddf-4def-b7f2-0b32206e8651" path="/var/lib/kubelet/pods/7599e0b6-bddf-4def-b7f2-0b32206e8651/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 
08:54:46.828712 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7afa918d-be67-40a6-803c-d3b0ae99d815" path="/var/lib/kubelet/pods/7afa918d-be67-40a6-803c-d3b0ae99d815/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.829763 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7df94c10-441d-4386-93a6-6730fb7bcde0" path="/var/lib/kubelet/pods/7df94c10-441d-4386-93a6-6730fb7bcde0/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.831449 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7fcc6409-8a0f-44c3-89e7-5aecd7610f8a" path="/var/lib/kubelet/pods/7fcc6409-8a0f-44c3-89e7-5aecd7610f8a/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.833878 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="81e39f7b-62e4-4fc9-992a-6535ce127a02" path="/var/lib/kubelet/pods/81e39f7b-62e4-4fc9-992a-6535ce127a02/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.834654 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="869851b9-7ffb-4af0-b166-1d8aa40a5f80" path="/var/lib/kubelet/pods/869851b9-7ffb-4af0-b166-1d8aa40a5f80/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.837635 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff" path="/var/lib/kubelet/pods/9276f8f5-2f24-48e1-ab6d-1aab0d8ec3ff/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.838563 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="92dfbade-90b6-4169-8c07-72cff7f2c82b" path="/var/lib/kubelet/pods/92dfbade-90b6-4169-8c07-72cff7f2c82b/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.840437 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="94a6e063-3d1a-4d44-875d-185291448c31" path="/var/lib/kubelet/pods/94a6e063-3d1a-4d44-875d-185291448c31/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.841846 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9f71a554-e414-4bc3-96d2-674060397afe" path="/var/lib/kubelet/pods/9f71a554-e414-4bc3-96d2-674060397afe/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.843634 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a208c9c2-333b-4b4a-be0d-bc32ec38a821" path="/var/lib/kubelet/pods/a208c9c2-333b-4b4a-be0d-bc32ec38a821/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.844871 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a52afe44-fb37-46ed-a1f8-bf39727a3cbe" path="/var/lib/kubelet/pods/a52afe44-fb37-46ed-a1f8-bf39727a3cbe/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.845938 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a555ff2e-0be6-46d5-897d-863bb92ae2b3" path="/var/lib/kubelet/pods/a555ff2e-0be6-46d5-897d-863bb92ae2b3/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.846853 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a7a88189-c967-4640-879e-27665747f20c" path="/var/lib/kubelet/pods/a7a88189-c967-4640-879e-27665747f20c/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.848221 5117 kubelet_volumes.go:152] "Cleaned up orphaned volume subpath from pod" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volume-subpaths/run-systemd/ovnkube-controller/6" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 
08:54:46.848528 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af33e427-6803-48c2-a76a-dd9deb7cbf9a" path="/var/lib/kubelet/pods/af33e427-6803-48c2-a76a-dd9deb7cbf9a/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.857160 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="af41de71-79cf-4590-bbe9-9e8b848862cb" path="/var/lib/kubelet/pods/af41de71-79cf-4590-bbe9-9e8b848862cb/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.858586 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a" path="/var/lib/kubelet/pods/b05a4c1d-fa93-4d3d-b6e5-235473e1ae2a/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.859543 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b4750666-1362-4001-abd0-6f89964cc621" path="/var/lib/kubelet/pods/b4750666-1362-4001-abd0-6f89964cc621/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.860898 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b605f283-6f2e-42da-a838-54421690f7d0" path="/var/lib/kubelet/pods/b605f283-6f2e-42da-a838-54421690f7d0/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.861548 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c491984c-7d4b-44aa-8c1e-d7974424fa47" path="/var/lib/kubelet/pods/c491984c-7d4b-44aa-8c1e-d7974424fa47/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.863063 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c5f2bfad-70f6-4185-a3d9-81ce12720767" path="/var/lib/kubelet/pods/c5f2bfad-70f6-4185-a3d9-81ce12720767/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.863916 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cc85e424-18b2-4924-920b-bd291a8c4b01" path="/var/lib/kubelet/pods/cc85e424-18b2-4924-920b-bd291a8c4b01/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.864469 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="ce090a97-9ab6-4c40-a719-64ff2acd9778" path="/var/lib/kubelet/pods/ce090a97-9ab6-4c40-a719-64ff2acd9778/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.869479 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d19cb085-0c5b-4810-b654-ce7923221d90" path="/var/lib/kubelet/pods/d19cb085-0c5b-4810-b654-ce7923221d90/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.870551 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d45be74c-0d98-4d18-90e4-f7ef1b6daaf7" path="/var/lib/kubelet/pods/d45be74c-0d98-4d18-90e4-f7ef1b6daaf7/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.872394 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d565531a-ff86-4608-9d19-767de01ac31b" path="/var/lib/kubelet/pods/d565531a-ff86-4608-9d19-767de01ac31b/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.873924 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d7e8f42f-dc0e-424b-bb56-5ec849834888" path="/var/lib/kubelet/pods/d7e8f42f-dc0e-424b-bb56-5ec849834888/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.874652 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9" path="/var/lib/kubelet/pods/dcd10325-9ba5-4a3b-8e4a-e57e3bf210f9/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.919742 5117 kubelet_node_status.go:736] 
"Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.919782 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.919795 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.919812 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.919825 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:46Z","lastTransitionTime":"2026-01-23T08:54:46Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.972842 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e093be35-bb62-4843-b2e8-094545761610" path="/var/lib/kubelet/pods/e093be35-bb62-4843-b2e8-094545761610/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.973980 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e1d2a42d-af1d-4054-9618-ab545e0ed8b7" path="/var/lib/kubelet/pods/e1d2a42d-af1d-4054-9618-ab545e0ed8b7/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.975152 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f559dfa3-3917-43a2-97f6-61ddfda10e93" path="/var/lib/kubelet/pods/f559dfa3-3917-43a2-97f6-61ddfda10e93/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.977117 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f65c0ac1-8bca-454d-a2e6-e35cb418beac" path="/var/lib/kubelet/pods/f65c0ac1-8bca-454d-a2e6-e35cb418beac/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.978658 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4" path="/var/lib/kubelet/pods/f7648cbb-48eb-4ba8-87ec-eb096b8fa1e4/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.980300 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7e2c886-118e-43bb-bef1-c78134de392b" path="/var/lib/kubelet/pods/f7e2c886-118e-43bb-bef1-c78134de392b/volumes" Jan 23 08:54:46 crc kubenswrapper[5117]: I0123 08:54:46.981376 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="fc8db2c7-859d-47b3-a900-2bd0c0b2973b" path="/var/lib/kubelet/pods/fc8db2c7-859d-47b3-a900-2bd0c0b2973b/volumes" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.022870 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.022909 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.022920 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.022934 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc 
kubenswrapper[5117]: I0123 08:54:47.022944 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.124923 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.124970 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.124985 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.125003 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.125016 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.228411 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.228473 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.228484 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.228503 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.228517 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.253351 5117 generic.go:358] "Generic (PLEG): container finished" podID="3fb9d7cb-4569-4674-b9bd-78ee34ca14a3" containerID="2cf775e0d0470b8bc272cace653eabc46c4fe8fcd0e3ea91e0b3bc4aea8de077" exitCode=0 Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.253498 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerDied","Data":"2cf775e0d0470b8bc272cace653eabc46c4fe8fcd0e3ea91e0b3bc4aea8de077"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.258705 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.258782 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.258801 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.258813 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.258825 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.258836 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.330698 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.330748 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.330761 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.330779 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.330791 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: 
NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.434911 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.434957 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.434999 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.435019 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.435032 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.537557 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.537870 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.537883 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.537900 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.537913 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.623201 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientMemory" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.623251 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasNoDiskPressure" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.623263 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeHasSufficientPID" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.623279 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeNotReady" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.623292 5117 setters.go:618] "Node became not ready" node="crc" condition={"type":"Ready","status":"False","lastHeartbeatTime":"2026-01-23T08:54:47Z","lastTransitionTime":"2026-01-23T08:54:47Z","reason":"KubeletNotReady","message":"container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?"} Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.679778 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6"] Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.685893 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.688020 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.688493 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.688697 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.688823 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.756541 5117 certificate_manager.go:566] "Rotating certificates" logger="kubernetes.io/kubelet-serving" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.767070 5117 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.819371 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc30cda7-bfbd-4819-b025-5161672241d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.819562 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: 
\"kubernetes.io/secret/bc30cda7-bfbd-4819-b025-5161672241d7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.819618 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc30cda7-bfbd-4819-b025-5161672241d7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.819705 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc30cda7-bfbd-4819-b025-5161672241d7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.819794 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc30cda7-bfbd-4819-b025-5161672241d7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.920979 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc30cda7-bfbd-4819-b025-5161672241d7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.921080 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc30cda7-bfbd-4819-b025-5161672241d7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.921111 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc30cda7-bfbd-4819-b025-5161672241d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.921196 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc30cda7-bfbd-4819-b025-5161672241d7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.921220 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: 
\"kubernetes.io/projected/bc30cda7-bfbd-4819-b025-5161672241d7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.921310 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-cvo-updatepayloads\" (UniqueName: \"kubernetes.io/host-path/bc30cda7-bfbd-4819-b025-5161672241d7-etc-cvo-updatepayloads\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.921712 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-ssl-certs\" (UniqueName: \"kubernetes.io/host-path/bc30cda7-bfbd-4819-b025-5161672241d7-etc-ssl-certs\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.922232 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bc30cda7-bfbd-4819-b025-5161672241d7-service-ca\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.973769 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bc30cda7-bfbd-4819-b025-5161672241d7-serving-cert\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.974017 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/bc30cda7-bfbd-4819-b025-5161672241d7-kube-api-access\") pod \"cluster-version-operator-7c9b9cfd6-nsvh6\" (UID: \"bc30cda7-bfbd-4819-b025-5161672241d7\") " pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:47 crc kubenswrapper[5117]: I0123 08:54:47.997655 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" Jan 23 08:54:48 crc kubenswrapper[5117]: W0123 08:54:48.012450 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podbc30cda7_bfbd_4819_b025_5161672241d7.slice/crio-685b3d6bb87311d5062e498a52a436f54c562eed02692d8ea65692da7929c63a WatchSource:0}: Error finding container 685b3d6bb87311d5062e498a52a436f54c562eed02692d8ea65692da7929c63a: Status 404 returned error can't find the container with id 685b3d6bb87311d5062e498a52a436f54c562eed02692d8ea65692da7929c63a Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.262472 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" event={"ID":"bc30cda7-bfbd-4819-b025-5161672241d7","Type":"ContainerStarted","Data":"b71e21f050999e68d8e7e636d13603561e690494a177b36696b0271588efb805"} Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.262535 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" event={"ID":"bc30cda7-bfbd-4819-b025-5161672241d7","Type":"ContainerStarted","Data":"685b3d6bb87311d5062e498a52a436f54c562eed02692d8ea65692da7929c63a"} Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.264926 5117 generic.go:358] "Generic (PLEG): container finished" podID="3fb9d7cb-4569-4674-b9bd-78ee34ca14a3" containerID="cfd94277e8685e04119bce9a1fa151ba4ca6deea9002cba922a17c09cbd9a1e2" exitCode=0 Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.265064 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerDied","Data":"cfd94277e8685e04119bce9a1fa151ba4ca6deea9002cba922a17c09cbd9a1e2"} Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.277696 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-version/cluster-version-operator-7c9b9cfd6-nsvh6" podStartSLOduration=79.277667075 podStartE2EDuration="1m19.277667075s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:48.277028688 +0000 UTC m=+100.033153714" watchObservedRunningTime="2026-01-23 08:54:48.277667075 +0000 UTC m=+100.033792101" Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.427390 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.428100 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.428164 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: 
\"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428176 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428216 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:54:52.428201095 +0000 UTC m=+104.184326131 (durationBeforeRetry 4s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428286 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428305 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428315 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428355 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:52.428256847 +0000 UTC m=+104.184381873 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.428438 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:52.428423981 +0000 UTC m=+104.184549007 (durationBeforeRetry 4s). 
Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.428802 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.428875 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.429232 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.429249 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.429256 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.429284 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:52.429275655 +0000 UTC m=+104.185400681 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.429800 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.429886 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. 
No retries permitted until 2026-01-23 08:54:52.429867871 +0000 UTC m=+104.185992897 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.529794 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.529953 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.530019 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs podName:ad3e9798-99d9-456f-b969-840508a6ac91 nodeName:}" failed. No retries permitted until 2026-01-23 08:54:52.530002121 +0000 UTC m=+104.286127157 (durationBeforeRetry 4s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs") pod "network-metrics-daemon-gcn4t" (UID: "ad3e9798-99d9-456f-b969-840508a6ac91") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.771620 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.771681 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.771803 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.771889 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.771927 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:48 crc kubenswrapper[5117]: I0123 08:54:48.771909 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.772032 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:48 crc kubenswrapper[5117]: E0123 08:54:48.772105 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:49 crc kubenswrapper[5117]: I0123 08:54:49.269610 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-operator/iptables-alerter-5jnd7" event={"ID":"428b39f5-eb1c-4f65-b7a4-eeb6e84860cc","Type":"ContainerStarted","Data":"c45570c6914eb676ed10788625db3bf47822dce5526a104fd94e143dbf4ba453"} Jan 23 08:54:49 crc kubenswrapper[5117]: I0123 08:54:49.272793 5117 generic.go:358] "Generic (PLEG): container finished" podID="3fb9d7cb-4569-4674-b9bd-78ee34ca14a3" containerID="21a7111800ac95e05ec6610abdbff49065e237944ff5eedd848927e600c16ee8" exitCode=0 Jan 23 08:54:49 crc kubenswrapper[5117]: I0123 08:54:49.272845 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerDied","Data":"21a7111800ac95e05ec6610abdbff49065e237944ff5eedd848927e600c16ee8"} Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.279666 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.282474 5117 generic.go:358] "Generic (PLEG): container finished" podID="3fb9d7cb-4569-4674-b9bd-78ee34ca14a3" containerID="638780fce9f724ca85a4f4bc87e5aa4917d29968fb611c10d93e66b75d295512" exitCode=0 Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.282579 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerDied","Data":"638780fce9f724ca85a4f4bc87e5aa4917d29968fb611c10d93e66b75d295512"} Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.770303 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.770346 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:50 crc kubenswrapper[5117]: E0123 08:54:50.770471 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.770521 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:50 crc kubenswrapper[5117]: E0123 08:54:50.770627 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:50 crc kubenswrapper[5117]: E0123 08:54:50.770730 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:50 crc kubenswrapper[5117]: I0123 08:54:50.770777 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:50 crc kubenswrapper[5117]: E0123 08:54:50.770862 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.021169 5117 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.294760 5117 generic.go:358] "Generic (PLEG): container finished" podID="3fb9d7cb-4569-4674-b9bd-78ee34ca14a3" containerID="d549318d7dc0e2a2f0bccedd8a5cf2c1f71af1ca4a37ebccfbb92fb2f3e3eaa6" exitCode=0 Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.294848 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerDied","Data":"d549318d7dc0e2a2f0bccedd8a5cf2c1f71af1ca4a37ebccfbb92fb2f3e3eaa6"} Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.470264 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470454 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:00.470421611 +0000 UTC m=+112.226546647 (durationBeforeRetry 8s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.470589 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.470673 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470746 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.470771 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: 
\"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470833 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:00.470818952 +0000 UTC m=+112.226943998 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470845 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470961 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470971 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.470922 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.471120 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.471163 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:00.471149281 +0000 UTC m=+112.227274307 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.471285 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:00.471271754 +0000 UTC m=+112.227396780 (durationBeforeRetry 8s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.471311 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.471490 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.471556 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.471665 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:00.471652395 +0000 UTC m=+112.227777421 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.572295 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.572432 5117 secret.go:189] Couldn't get secret openshift-multus/metrics-daemon-secret: object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.572507 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs podName:ad3e9798-99d9-456f-b969-840508a6ac91 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:00.572488954 +0000 UTC m=+112.328613990 (durationBeforeRetry 8s). Error: MountVolume.SetUp failed for volume "metrics-certs" (UniqueName: "kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs") pod "network-metrics-daemon-gcn4t" (UID: "ad3e9798-99d9-456f-b969-840508a6ac91") : object "openshift-multus"/"metrics-daemon-secret" not registered Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.770206 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.770253 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.770344 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.770652 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.770744 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:52 crc kubenswrapper[5117]: I0123 08:54:52.770768 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.770839 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:52 crc kubenswrapper[5117]: E0123 08:54:52.770916 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.303350 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerStarted","Data":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.304045 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.304063 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.304072 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.312618 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" event={"ID":"3fb9d7cb-4569-4674-b9bd-78ee34ca14a3","Type":"ContainerStarted","Data":"7dc2807905f88dc879cf36a48a66736dca8b1bdfba6790358bfbfd8b341b0868"} Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.353566 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podStartSLOduration=84.35354811 podStartE2EDuration="1m24.35354811s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:53.353359835 +0000 UTC m=+105.109484891" watchObservedRunningTime="2026-01-23 08:54:53.35354811 +0000 UTC m=+105.109673136" Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.411794 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:53 crc kubenswrapper[5117]: I0123 08:54:53.414378 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:54:54 crc kubenswrapper[5117]: I0123 08:54:54.343349 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-additional-cni-plugins-ggtdd" podStartSLOduration=85.343318828 podStartE2EDuration="1m25.343318828s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:54:54.342848836 +0000 UTC m=+106.098973932" watchObservedRunningTime="2026-01-23 08:54:54.343318828 +0000 UTC m=+106.099443884" Jan 23 08:54:54 crc kubenswrapper[5117]: I0123 08:54:54.769896 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:54 crc kubenswrapper[5117]: I0123 08:54:54.770121 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:54 crc kubenswrapper[5117]: E0123 08:54:54.770359 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:54 crc kubenswrapper[5117]: I0123 08:54:54.770171 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:54 crc kubenswrapper[5117]: I0123 08:54:54.770150 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:54 crc kubenswrapper[5117]: E0123 08:54:54.770436 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:54 crc kubenswrapper[5117]: E0123 08:54:54.770484 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:54 crc kubenswrapper[5117]: E0123 08:54:54.770562 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:55 crc kubenswrapper[5117]: I0123 08:54:55.348762 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gcn4t"] Jan 23 08:54:55 crc kubenswrapper[5117]: I0123 08:54:55.348871 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:55 crc kubenswrapper[5117]: E0123 08:54:55.348956 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:56 crc kubenswrapper[5117]: I0123 08:54:56.775290 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:56 crc kubenswrapper[5117]: I0123 08:54:56.775319 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:56 crc kubenswrapper[5117]: E0123 08:54:56.775451 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:56 crc kubenswrapper[5117]: E0123 08:54:56.775572 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:56 crc kubenswrapper[5117]: I0123 08:54:56.776785 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:54:56 crc kubenswrapper[5117]: I0123 08:54:56.776840 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:56 crc kubenswrapper[5117]: I0123 08:54:56.776889 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:56 crc kubenswrapper[5117]: E0123 08:54:56.777063 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:56 crc kubenswrapper[5117]: E0123 08:54:56.777196 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:56 crc kubenswrapper[5117]: E0123 08:54:56.777201 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"kube-apiserver-check-endpoints\" with CrashLoopBackOff: \"back-off 40s restarting failed container=kube-apiserver-check-endpoints pod=kube-apiserver-crc_openshift-kube-apiserver(3a14caf222afb62aaabdc47808b6f944)\"" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" Jan 23 08:54:58 crc kubenswrapper[5117]: I0123 08:54:58.772233 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:54:58 crc kubenswrapper[5117]: E0123 08:54:58.772339 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-diagnostics/network-check-target-fhkjl" podUID="17b87002-b798-480a-8e17-83053d698239" Jan 23 08:54:58 crc kubenswrapper[5117]: I0123 08:54:58.772409 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:54:58 crc kubenswrapper[5117]: E0123 08:54:58.772544 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-multus/network-metrics-daemon-gcn4t" podUID="ad3e9798-99d9-456f-b969-840508a6ac91" Jan 23 08:54:58 crc kubenswrapper[5117]: I0123 08:54:58.772625 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:54:58 crc kubenswrapper[5117]: I0123 08:54:58.772729 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:54:58 crc kubenswrapper[5117]: E0123 08:54:58.772827 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" podUID="6a9ae5f6-97bd-46ac-bafa-ca1b4452a141" Jan 23 08:54:58 crc kubenswrapper[5117]: E0123 08:54:58.772941 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="network is not ready: container runtime network not ready: NetworkReady=false reason:NetworkPluginNotReady message:Network plugin returns error: no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" 
pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" podUID="f863fff9-286a-45fa-b8f0-8a86994b8440" Jan 23 08:54:59 crc kubenswrapper[5117]: I0123 08:54:59.522737 5117 kubelet_node_status.go:736] "Recording event message for node" node="crc" event="NodeReady" Jan 23 08:54:59 crc kubenswrapper[5117]: I0123 08:54:59.522923 5117 kubelet_node_status.go:550] "Fast updating node status as it just became ready" Jan 23 08:54:59 crc kubenswrapper[5117]: I0123 08:54:59.555901 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-p4kn6"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.562921 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563043 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:16.563024168 +0000 UTC m=+128.319149194 (durationBeforeRetry 16s). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.563089 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.563113 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.563157 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.563195 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " 
pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563313 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563325 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563334 5117 projected.go:194] Error preparing data for projected volume kube-api-access-gwt8b for pod openshift-network-diagnostics/network-check-target-fhkjl: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563369 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b podName:17b87002-b798-480a-8e17-83053d698239 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:16.563361937 +0000 UTC m=+128.319486963 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-gwt8b" (UniqueName: "kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b") pod "network-check-target-fhkjl" (UID: "17b87002-b798-480a-8e17-83053d698239") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563742 5117 secret.go:189] Couldn't get secret openshift-network-console/networking-console-plugin-cert: object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563769 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:16.563761778 +0000 UTC m=+128.319886804 (durationBeforeRetry 16s). 
Error: MountVolume.SetUp failed for volume "networking-console-plugin-cert" (UniqueName: "kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin-cert" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563806 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/kube-root-ca.crt: object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563813 5117 projected.go:289] Couldn't get configMap openshift-network-diagnostics/openshift-service-ca.crt: object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563820 5117 projected.go:194] Error preparing data for projected volume kube-api-access-l7w75 for pod openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5: [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563840 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75 podName:f863fff9-286a-45fa-b8f0-8a86994b8440 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:16.56383497 +0000 UTC m=+128.319959996 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "kube-api-access-l7w75" (UniqueName: "kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75") pod "network-check-source-5bb8f5cd97-xdvz5" (UID: "f863fff9-286a-45fa-b8f0-8a86994b8440") : [object "openshift-network-diagnostics"/"kube-root-ca.crt" not registered, object "openshift-network-diagnostics"/"openshift-service-ca.crt" not registered] Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563866 5117 configmap.go:193] Couldn't get configMap openshift-network-console/networking-console-plugin: object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: E0123 08:55:00.563885 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf podName:6a9ae5f6-97bd-46ac-bafa-ca1b4452a141 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:16.563879842 +0000 UTC m=+128.320004868 (durationBeforeRetry 16s). Error: MountVolume.SetUp failed for volume "nginx-conf" (UniqueName: "kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf") pod "networking-console-plugin-5ff7774fd9-nljh6" (UID: "6a9ae5f6-97bd-46ac-bafa-ca1b4452a141") : object "openshift-network-console"/"networking-console-plugin" not registered Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.566669 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6cjff"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.566878 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.571303 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.571908 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.572208 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.572409 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.572689 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.573318 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.573404 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.573440 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.573713 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.573794 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.579550 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.600782 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-gr9mz"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.600963 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.600986 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.601109 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.601119 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.601432 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.605151 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.605421 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.605727 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.605808 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.607709 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.607765 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.607713 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.607888 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.607898 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.607999 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.608079 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.608400 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.615629 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.640192 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-24k6c"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.640663 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.645582 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.645677 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.645748 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.645835 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.645965 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.646189 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.655798 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.655964 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.658534 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661561 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661573 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661581 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661654 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661668 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661763 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661668 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.661764 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663623 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-encryption-config\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663677 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-config\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663701 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kp4bm\" (UniqueName: \"kubernetes.io/projected/bf50dcce-4975-442b-980a-da2e9b937d0e-kube-api-access-kp4bm\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663727 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-audit\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663751 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-serving-cert\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663811 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.663954 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-etcd-client\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664084 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664122 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f1e5bf-d4d4-4e48-8973-9645b408072b-serving-cert\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " 
pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664221 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664271 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07f1e5bf-d4d4-4e48-8973-9645b408072b-tmp\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664327 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-client-ca\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664381 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf50dcce-4975-442b-980a-da2e9b937d0e-node-pullsecrets\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664613 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664681 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-image-import-ca\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664784 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2tphm\" (UniqueName: \"kubernetes.io/projected/07f1e5bf-d4d4-4e48-8973-9645b408072b-kube-api-access-2tphm\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664909 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf50dcce-4975-442b-980a-da2e9b937d0e-audit-dir\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.664981 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-config\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.670351 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/ad3e9798-99d9-456f-b969-840508a6ac91-metrics-certs\") pod \"network-metrics-daemon-gcn4t\" (UID: \"ad3e9798-99d9-456f-b969-840508a6ac91\") " pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.698628 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-machine-approver/machine-approver-54c688565-grmnk"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.698808 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.705836 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.705888 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.705840 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.706296 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.706871 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.707739 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.766737 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n7spf\" (UniqueName: \"kubernetes.io/projected/1f4bb843-1670-4180-9fe6-43c005930de0-kube-api-access-n7spf\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.766785 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4bb843-1670-4180-9fe6-43c005930de0-config\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.766806 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-etcd-client\") pod 
\"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.766925 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.766971 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f1e5bf-d4d4-4e48-8973-9645b408072b-serving-cert\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.766995 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-trusted-ca-bundle\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767034 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fbeda70-518a-438d-9cd2-3d6d01898eaa-audit-dir\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767097 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-config\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767167 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-audit-policies\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767188 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767753 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-etcd-serving-ca\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767206 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-serving-cert\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767832 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07f1e5bf-d4d4-4e48-8973-9645b408072b-tmp\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767858 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-client-ca\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767878 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-etcd-serving-ca\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767906 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf50dcce-4975-442b-980a-da2e9b937d0e-node-pullsecrets\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767925 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767941 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1f4bb843-1670-4180-9fe6-43c005930de0-images\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767956 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5t8kl\" (UniqueName: \"kubernetes.io/projected/4fbeda70-518a-438d-9cd2-3d6d01898eaa-kube-api-access-5t8kl\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767979 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2da5d032-059b-467c-89ec-6d81958200ca-serving-cert\") pod \"route-controller-manager-776cdc94d6-brg6c\" 
(UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.767995 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-image-import-ca\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768017 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2tphm\" (UniqueName: \"kubernetes.io/projected/07f1e5bf-d4d4-4e48-8973-9645b408072b-kube-api-access-2tphm\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768033 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f4bb843-1670-4180-9fe6-43c005930de0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768048 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf50dcce-4975-442b-980a-da2e9b937d0e-audit-dir\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768062 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-config\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768080 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2da5d032-059b-467c-89ec-6d81958200ca-tmp\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768096 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pvwmj\" (UniqueName: \"kubernetes.io/projected/2da5d032-059b-467c-89ec-6d81958200ca-kube-api-access-pvwmj\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768112 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-encryption-config\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: 
I0123 08:55:00.768114 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-trusted-ca-bundle\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768156 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-config\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768176 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kp4bm\" (UniqueName: \"kubernetes.io/projected/bf50dcce-4975-442b-980a-da2e9b937d0e-kube-api-access-kp4bm\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768192 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-encryption-config\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768209 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-audit\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768222 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-serving-cert\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768237 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-client-ca\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768263 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-etcd-client\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768468 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07f1e5bf-d4d4-4e48-8973-9645b408072b-tmp\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 
08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768520 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bf50dcce-4975-442b-980a-da2e9b937d0e-node-pullsecrets\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.768771 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/bf50dcce-4975-442b-980a-da2e9b937d0e-audit-dir\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.769508 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-config\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.770063 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-import-ca\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-image-import-ca\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.770801 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit\" (UniqueName: \"kubernetes.io/configmap/bf50dcce-4975-442b-980a-da2e9b937d0e-audit\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.771741 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-config\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.771813 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f1e5bf-d4d4-4e48-8973-9645b408072b-serving-cert\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.771880 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-client-ca\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.771959 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-proxy-ca-bundles\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 
08:55:00.772682 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-etcd-client\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.774718 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-encryption-config\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.776340 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/bf50dcce-4975-442b-980a-da2e9b937d0e-serving-cert\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.785836 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2tphm\" (UniqueName: \"kubernetes.io/projected/07f1e5bf-d4d4-4e48-8973-9645b408072b-kube-api-access-2tphm\") pod \"controller-manager-65b6cccf98-6cjff\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.785993 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kp4bm\" (UniqueName: \"kubernetes.io/projected/bf50dcce-4975-442b-980a-da2e9b937d0e-kube-api-access-kp4bm\") pod \"apiserver-9ddfb9f55-p4kn6\" (UID: \"bf50dcce-4975-442b-980a-da2e9b937d0e\") " pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.791301 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.791443 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.793237 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.793896 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.794707 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.795429 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.795588 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.795597 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.869297 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pvwmj\" (UniqueName: \"kubernetes.io/projected/2da5d032-059b-467c-89ec-6d81958200ca-kube-api-access-pvwmj\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.869363 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b258694-fb3a-4669-bd71-93214f0695c6-config\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.869402 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-encryption-config\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.869428 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-client-ca\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.869457 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n7spf\" (UniqueName: \"kubernetes.io/projected/1f4bb843-1670-4180-9fe6-43c005930de0-kube-api-access-n7spf\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 
08:55:00.869486 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4bb843-1670-4180-9fe6-43c005930de0-config\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.869958 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-etcd-client\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870064 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-trusted-ca-bundle\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870115 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fbeda70-518a-438d-9cd2-3d6d01898eaa-audit-dir\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870195 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-config\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870235 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-audit-policies\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870266 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b258694-fb3a-4669-bd71-93214f0695c6-auth-proxy-config\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870589 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/4fbeda70-518a-438d-9cd2-3d6d01898eaa-audit-dir\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870621 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-serving-cert\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " 
pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870686 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-client-ca\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870688 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-etcd-serving-ca\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870798 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b258694-fb3a-4669-bd71-93214f0695c6-machine-approver-tls\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870825 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1f4bb843-1670-4180-9fe6-43c005930de0-images\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870848 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5t8kl\" (UniqueName: \"kubernetes.io/projected/4fbeda70-518a-438d-9cd2-3d6d01898eaa-kube-api-access-5t8kl\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870883 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2da5d032-059b-467c-89ec-6d81958200ca-serving-cert\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870915 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f4bb843-1670-4180-9fe6-43c005930de0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870936 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jrdrl\" (UniqueName: \"kubernetes.io/projected/3b258694-fb3a-4669-bd71-93214f0695c6-kube-api-access-jrdrl\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.870978 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2da5d032-059b-467c-89ec-6d81958200ca-tmp\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.872178 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-audit-policies\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.872245 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-trusted-ca-bundle\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.872746 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/1f4bb843-1670-4180-9fe6-43c005930de0-config\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.872813 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-config\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.873114 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2da5d032-059b-467c-89ec-6d81958200ca-tmp\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.873247 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/1f4bb843-1670-4180-9fe6-43c005930de0-images\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.874876 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"encryption-config\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-encryption-config\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.875472 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-serving-cert\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 
08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.875983 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2da5d032-059b-467c-89ec-6d81958200ca-serving-cert\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.876334 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-api-operator-tls\" (UniqueName: \"kubernetes.io/secret/1f4bb843-1670-4180-9fe6-43c005930de0-machine-api-operator-tls\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.876454 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/4fbeda70-518a-438d-9cd2-3d6d01898eaa-etcd-client\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.885496 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-serving-ca\" (UniqueName: \"kubernetes.io/configmap/4fbeda70-518a-438d-9cd2-3d6d01898eaa-etcd-serving-ca\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.885712 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.888517 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pvwmj\" (UniqueName: \"kubernetes.io/projected/2da5d032-059b-467c-89ec-6d81958200ca-kube-api-access-pvwmj\") pod \"route-controller-manager-776cdc94d6-brg6c\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.889785 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5t8kl\" (UniqueName: \"kubernetes.io/projected/4fbeda70-518a-438d-9cd2-3d6d01898eaa-kube-api-access-5t8kl\") pod \"apiserver-8596bd845d-24k6c\" (UID: \"4fbeda70-518a-438d-9cd2-3d6d01898eaa\") " pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.892911 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n7spf\" (UniqueName: \"kubernetes.io/projected/1f4bb843-1670-4180-9fe6-43c005930de0-kube-api-access-n7spf\") pod \"machine-api-operator-755bb95488-gr9mz\" (UID: \"1f4bb843-1670-4180-9fe6-43c005930de0\") " pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.932540 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.939552 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/network-metrics-daemon-gcn4t" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.953271 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console-operator/console-operator-67c89758df-trv9s"] Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.953475 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.955763 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.955979 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.957214 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.957467 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.957477 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.962643 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.971216 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.972232 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b258694-fb3a-4669-bd71-93214f0695c6-config\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.972325 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b258694-fb3a-4669-bd71-93214f0695c6-auth-proxy-config\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.972363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b258694-fb3a-4669-bd71-93214f0695c6-machine-approver-tls\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.972392 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jrdrl\" (UniqueName: \"kubernetes.io/projected/3b258694-fb3a-4669-bd71-93214f0695c6-kube-api-access-jrdrl\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.972966 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/3b258694-fb3a-4669-bd71-93214f0695c6-config\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.973413 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/3b258694-fb3a-4669-bd71-93214f0695c6-auth-proxy-config\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.984381 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"machine-approver-tls\" (UniqueName: \"kubernetes.io/secret/3b258694-fb3a-4669-bd71-93214f0695c6-machine-approver-tls\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:00 crc kubenswrapper[5117]: I0123 08:55:00.991083 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jrdrl\" (UniqueName: \"kubernetes.io/projected/3b258694-fb3a-4669-bd71-93214f0695c6-kube-api-access-jrdrl\") pod \"machine-approver-54c688565-grmnk\" (UID: \"3b258694-fb3a-4669-bd71-93214f0695c6\") " pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.024442 5117 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.073723 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195fafe4-a2c8-4665-88ea-f3540c149607-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.073810 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtknw\" (UniqueName: \"kubernetes.io/projected/195fafe4-a2c8-4665-88ea-f3540c149607-kube-api-access-mtknw\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.073841 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195fafe4-a2c8-4665-88ea-f3540c149607-config\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.112584 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" Jan 23 08:55:01 crc kubenswrapper[5117]: W0123 08:55:01.123224 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3b258694_fb3a_4669_bd71_93214f0695c6.slice/crio-efe43e906e913445a4e216d56de41aecc442ddc173a175cf553b1862d622ca12 WatchSource:0}: Error finding container efe43e906e913445a4e216d56de41aecc442ddc173a175cf553b1862d622ca12: Status 404 returned error can't find the container with id efe43e906e913445a4e216d56de41aecc442ddc173a175cf553b1862d622ca12 Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.169666 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.172155 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.172715 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.172712 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.173511 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.173860 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.174269 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195fafe4-a2c8-4665-88ea-f3540c149607-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.174320 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtknw\" (UniqueName: \"kubernetes.io/projected/195fafe4-a2c8-4665-88ea-f3540c149607-kube-api-access-mtknw\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.174350 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195fafe4-a2c8-4665-88ea-f3540c149607-config\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.174884 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/195fafe4-a2c8-4665-88ea-f3540c149607-config\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.181258 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-plxfl"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.181808 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.183962 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/195fafe4-a2c8-4665-88ea-f3540c149607-serving-cert\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: 
\"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.184374 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-45854"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.187353 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5dggj"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.187767 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.189551 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.195584 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.196214 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.196453 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.196625 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.197297 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.197614 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.197847 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.198058 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.199279 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.199524 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.199565 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.199770 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.205849 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtknw\" (UniqueName: \"kubernetes.io/projected/195fafe4-a2c8-4665-88ea-f3540c149607-kube-api-access-mtknw\") pod \"openshift-apiserver-operator-846cbfc458-cs5tv\" (UID: \"195fafe4-a2c8-4665-88ea-f3540c149607\") " pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.205941 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.210751 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.211049 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.211354 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.211536 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.211662 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.211764 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.211822 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.212172 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.212295 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.212617 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.212741 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.213906 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.246471 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.247252 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.255035 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.256902 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.264298 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/console-64d44f6ddf-mkrsr"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.264410 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.264471 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.265978 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.267630 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.268573 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.268796 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.269392 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.269648 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.270683 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.270841 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.271037 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.271264 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.272449 5117 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.272628 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275561 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rxhbr\" (UniqueName: \"kubernetes.io/projected/92a6e0b4-4edc-47e6-8096-74e02438118b-kube-api-access-rxhbr\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275623 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275662 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275695 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275726 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cef46b3-a580-4159-a021-b878a03e1e39-serving-cert\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275754 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cef46b3-a580-4159-a021-b878a03e1e39-trusted-ca\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275780 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275812 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275843 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275894 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275935 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92a6e0b4-4edc-47e6-8096-74e02438118b-serving-cert\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275960 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-dir\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.275991 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qj4zb\" (UniqueName: \"kubernetes.io/projected/f399bc34-e976-4cd2-90df-40a3d08fb983-kube-api-access-qj4zb\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276021 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276049 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mtchx\" (UniqueName: \"kubernetes.io/projected/5cef46b3-a580-4159-a021-b878a03e1e39-kube-api-access-mtchx\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " 
pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276074 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276153 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276192 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef46b3-a580-4159-a021-b878a03e1e39-config\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276221 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-config\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276247 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276274 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbb2477-dcc9-41a1-9f94-638266460dc2-serving-cert\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276299 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-policies\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276325 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5dggj\" 
(UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276367 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5fbb2477-dcc9-41a1-9f94-638266460dc2-available-featuregates\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276395 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.276431 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ljd6n\" (UniqueName: \"kubernetes.io/projected/5fbb2477-dcc9-41a1-9f94-638266460dc2-kube-api-access-ljd6n\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.279596 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.279705 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.279596 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.279703 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.279827 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.279987 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.280395 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.280598 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-jjphf"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.280808 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.281005 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.286991 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-console/downloads-747b44746d-lr6gt"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.288621 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.288931 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.292380 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.294341 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.294574 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.298610 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.299796 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.301701 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.302683 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.310650 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.329672 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-z57tz"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.329777 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.329912 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.349710 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.369240 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-gr9mz"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.369427 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.369564 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" event={"ID":"3b258694-fb3a-4669-bd71-93214f0695c6","Type":"ContainerStarted","Data":"efe43e906e913445a4e216d56de41aecc442ddc173a175cf553b1862d622ca12"} Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.370252 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.370257 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.375204 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress/router-default-68cf44c8b8-mlq9j"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.375532 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377490 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5fbb2477-dcc9-41a1-9f94-638266460dc2-available-featuregates\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377531 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377561 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377585 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/471904d8-f887-45d3-89e4-efd24d2f7ab1-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" 
(UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377806 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377866 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ljd6n\" (UniqueName: \"kubernetes.io/projected/5fbb2477-dcc9-41a1-9f94-638266460dc2-kube-api-access-ljd6n\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377897 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n9v29\" (UniqueName: \"kubernetes.io/projected/5a2a6f4b-477c-495b-a191-95a213405e0d-kube-api-access-n9v29\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377933 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rxhbr\" (UniqueName: \"kubernetes.io/projected/92a6e0b4-4edc-47e6-8096-74e02438118b-kube-api-access-rxhbr\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377955 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.377979 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378002 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378281 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378313 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cef46b3-a580-4159-a021-b878a03e1e39-serving-cert\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378340 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mvwj9\" (UniqueName: \"kubernetes.io/projected/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-kube-api-access-mvwj9\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378362 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cef46b3-a580-4159-a021-b878a03e1e39-trusted-ca\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378386 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378409 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5a2a6f4b-477c-495b-a191-95a213405e0d-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378439 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378461 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-g6ddt\" (UniqueName: \"kubernetes.io/projected/047f668a-55a2-4033-bd74-8df8d6ffd36f-kube-api-access-g6ddt\") pod \"cluster-samples-operator-6b564684c8-fvslw\" (UID: \"047f668a-55a2-4033-bd74-8df8d6ffd36f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.378949 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-thh6t\" (UniqueName: \"kubernetes.io/projected/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-kube-api-access-thh6t\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379039 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379094 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379277 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-serving-cert\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379359 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-config\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379439 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92a6e0b4-4edc-47e6-8096-74e02438118b-serving-cert\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379568 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-dir\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379607 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-trusted-ca-bundle\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379731 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: 
\"kubernetes.io/host-path/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-dir\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379790 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"available-featuregates\" (UniqueName: \"kubernetes.io/empty-dir/5fbb2477-dcc9-41a1-9f94-638266460dc2-available-featuregates\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.379965 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xdmvc\" (UniqueName: \"kubernetes.io/projected/bdb7201e-da3c-4c31-9cae-a139067e3a83-kube-api-access-xdmvc\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.380200 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-service-ca\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.380447 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-service-ca\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381001 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-cliconfig\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.380562 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a980473e-0fae-4e77-a49b-26eb41252303-tmp-dir\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381305 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-76rp4\" (UniqueName: \"kubernetes.io/projected/a980473e-0fae-4e77-a49b-26eb41252303-kube-api-access-76rp4\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381352 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qj4zb\" (UniqueName: \"kubernetes.io/projected/f399bc34-e976-4cd2-90df-40a3d08fb983-kube-api-access-qj4zb\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: 
\"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381447 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/5cef46b3-a580-4159-a021-b878a03e1e39-trusted-ca\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381633 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381745 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-service-ca-bundle\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381830 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381838 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.381996 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/471904d8-f887-45d3-89e4-efd24d2f7ab1-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382063 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382088 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mtchx\" (UniqueName: \"kubernetes.io/projected/5cef46b3-a580-4159-a021-b878a03e1e39-kube-api-access-mtchx\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382108 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " 
pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382156 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382180 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a2a6f4b-477c-495b-a191-95a213405e0d-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382202 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382298 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382324 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a980473e-0fae-4e77-a49b-26eb41252303-metrics-tls\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382368 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2a6f4b-477c-495b-a191-95a213405e0d-config\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382399 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-oauth-serving-cert\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382423 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/047f668a-55a2-4033-bd74-8df8d6ffd36f-samples-operator-tls\") 
pod \"cluster-samples-operator-6b564684c8-fvslw\" (UID: \"047f668a-55a2-4033-bd74-8df8d6ffd36f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382444 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/471904d8-f887-45d3-89e4-efd24d2f7ab1-config\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382463 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382485 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382599 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef46b3-a580-4159-a021-b878a03e1e39-config\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382625 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-config\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382648 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382669 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-oauth-config\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382688 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rvgdv\" (UniqueName: \"kubernetes.io/projected/5c20ff45-85a7-456d-90ae-85845c4aec43-kube-api-access-rvgdv\") pod 
\"downloads-747b44746d-lr6gt\" (UID: \"5c20ff45-85a7-456d-90ae-85845c4aec43\") " pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382710 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbb2477-dcc9-41a1-9f94-638266460dc2-serving-cert\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382728 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-policies\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382749 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/471904d8-f887-45d3-89e4-efd24d2f7ab1-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.382771 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.383602 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5cef46b3-a580-4159-a021-b878a03e1e39-config\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.383925 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-config\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.384226 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-login\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.384497 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-policies\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc 
kubenswrapper[5117]: I0123 08:55:01.384581 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/92a6e0b4-4edc-47e6-8096-74e02438118b-trusted-ca-bundle\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.386192 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-serving-cert\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.386966 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.387077 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-error\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.387487 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/92a6e0b4-4edc-47e6-8096-74e02438118b-serving-cert\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.387608 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-session\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.388375 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5fbb2477-dcc9-41a1-9f94-638266460dc2-serving-cert\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.389880 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5cef46b3-a580-4159-a021-b878a03e1e39-serving-cert\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.390629 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.390730 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.391056 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-router-certs\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.399262 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.399435 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.402383 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.402873 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.410480 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.411635 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.411747 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.416401 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.416556 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.426261 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6cjff"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.426344 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.428852 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.431037 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.432511 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t67z9"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.432694 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.438350 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.438402 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.449801 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.453455 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.455212 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.459088 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.459201 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.459697 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.472797 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca/service-ca-74545575db-mtbpp"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.473752 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.480473 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.480965 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.480978 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483641 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rvgdv\" (UniqueName: \"kubernetes.io/projected/5c20ff45-85a7-456d-90ae-85845c4aec43-kube-api-access-rvgdv\") pod \"downloads-747b44746d-lr6gt\" (UID: \"5c20ff45-85a7-456d-90ae-85845c4aec43\") " pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483676 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf7dbe39-9107-49a2-ac48-8264fe632e4d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483704 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/471904d8-f887-45d3-89e4-efd24d2f7ab1-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483720 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc9ea72b-c9a6-4fbc-873a-133709e45add-service-ca-bundle\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483750 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qtbx6\" (UniqueName: \"kubernetes.io/projected/4356887a-acbc-4cf4-8fe3-4cea5a46a05a-kube-api-access-qtbx6\") pod \"migrator-866fcbc849-h2827\" (UID: \"4356887a-acbc-4cf4-8fe3-4cea5a46a05a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483767 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-tmpfs\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483781 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1609f854-4b62-4513-8fda-097c55c22a43-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483797 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-etcd-ca\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " 
pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483816 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6a6d13-d350-414a-8d19-266287758441-serving-cert\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483834 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf7dbe39-9107-49a2-ac48-8264fe632e4d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483849 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf7dbe39-9107-49a2-ac48-8264fe632e4d-images\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483870 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-g6ddt\" (UniqueName: \"kubernetes.io/projected/047f668a-55a2-4033-bd74-8df8d6ffd36f-kube-api-access-g6ddt\") pod \"cluster-samples-operator-6b564684c8-fvslw\" (UID: \"047f668a-55a2-4033-bd74-8df8d6ffd36f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483886 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-thh6t\" (UniqueName: \"kubernetes.io/projected/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-kube-api-access-thh6t\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483902 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483917 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bmlc4\" (UniqueName: \"kubernetes.io/projected/cf7dbe39-9107-49a2-ac48-8264fe632e4d-kube-api-access-bmlc4\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483939 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-76rp4\" (UniqueName: \"kubernetes.io/projected/a980473e-0fae-4e77-a49b-26eb41252303-kube-api-access-76rp4\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " 
pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483957 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c6337181-3803-4b34-bd4d-ab52efa11902-tmpfs\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483972 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.483990 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.484010 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2a6f4b-477c-495b-a191-95a213405e0d-config\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.484025 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-stats-auth\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.484042 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tqd8w\" (UniqueName: \"kubernetes.io/projected/1609f854-4b62-4513-8fda-097c55c22a43-kube-api-access-tqd8w\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.484060 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6337181-3803-4b34-bd4d-ab52efa11902-srv-cert\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.484892 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/471904d8-f887-45d3-89e4-efd24d2f7ab1-tmp-dir\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " 
pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.484953 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/04eba658-5e10-4d58-9238-75ce437d7bec-webhook-certs\") pod \"multus-admission-controller-69db94689b-z57tz\" (UID: \"04eba658-5e10-4d58-9238-75ce437d7bec\") " pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.485003 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/047f668a-55a2-4033-bd74-8df8d6ffd36f-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-fvslw\" (UID: \"047f668a-55a2-4033-bd74-8df8d6ffd36f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.485083 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-config\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.485260 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-oauth-config\" (UniqueName: \"kubernetes.io/secret/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-oauth-config\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486000 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-metrics-certs\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486089 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486120 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-image-registry-operator-tls\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486173 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/5a2a6f4b-477c-495b-a191-95a213405e0d-config\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " 
pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486369 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/471904d8-f887-45d3-89e4-efd24d2f7ab1-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486396 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r9nz4\" (UniqueName: \"kubernetes.io/projected/7d6a6d13-d350-414a-8d19-266287758441-kube-api-access-r9nz4\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486473 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n9v29\" (UniqueName: \"kubernetes.io/projected/5a2a6f4b-477c-495b-a191-95a213405e0d-kube-api-access-n9v29\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486555 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486606 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6337181-3803-4b34-bd4d-ab52efa11902-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486634 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d6a6d13-d350-414a-8d19-266287758441-tmp-dir\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486672 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mvwj9\" (UniqueName: \"kubernetes.io/projected/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-kube-api-access-mvwj9\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486687 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etcd-client\" (UniqueName: 
\"kubernetes.io/secret/7d6a6d13-d350-414a-8d19-266287758441-etcd-client\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486709 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5a2a6f4b-477c-495b-a191-95a213405e0d-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486748 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-config\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486765 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-serving-cert\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486781 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xdmvc\" (UniqueName: \"kubernetes.io/projected/bdb7201e-da3c-4c31-9cae-a139067e3a83-kube-api-access-xdmvc\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486796 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-service-ca\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486810 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a980473e-0fae-4e77-a49b-26eb41252303-tmp-dir\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486824 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1609f854-4b62-4513-8fda-097c55c22a43-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486844 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-trusted-ca-bundle\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.486860 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/471904d8-f887-45d3-89e4-efd24d2f7ab1-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487019 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d8bnr\" (UniqueName: \"kubernetes.io/projected/04eba658-5e10-4d58-9238-75ce437d7bec-kube-api-access-d8bnr\") pod \"multus-admission-controller-69db94689b-z57tz\" (UID: \"04eba658-5e10-4d58-9238-75ce437d7bec\") " pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487061 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wn7hh\" (UniqueName: \"kubernetes.io/projected/c6337181-3803-4b34-bd4d-ab52efa11902-kube-api-access-wn7hh\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487079 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-default-certificate\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487101 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a2a6f4b-477c-495b-a191-95a213405e0d-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487119 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7mpws\" (UniqueName: \"kubernetes.io/projected/bc9ea72b-c9a6-4fbc-873a-133709e45add-kube-api-access-7mpws\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487165 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8jkw7\" (UniqueName: \"kubernetes.io/projected/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-kube-api-access-8jkw7\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487193 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc 
kubenswrapper[5117]: I0123 08:55:01.487219 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a980473e-0fae-4e77-a49b-26eb41252303-metrics-tls\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487234 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-srv-cert\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487251 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/471904d8-f887-45d3-89e4-efd24d2f7ab1-config\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487324 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-config\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487842 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/5a2a6f4b-477c-495b-a191-95a213405e0d-tmp\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.488204 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted-pem\" (UniqueName: \"kubernetes.io/empty-dir/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-ca-trust-extracted-pem\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.487267 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-oauth-serving-cert\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.488296 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-profile-collector-cert\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.488331 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.489470 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-config\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-config\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.489546 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-service-ca\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.488607 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-tmp\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.489657 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"oauth-serving-cert\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-oauth-serving-cert\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.490153 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-serving-cert\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.490658 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/a980473e-0fae-4e77-a49b-26eb41252303-tmp-dir\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.490630 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.491261 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bdb7201e-da3c-4c31-9cae-a139067e3a83-trusted-ca-bundle\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.492954 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-oauth-config\" (UniqueName: 
\"kubernetes.io/secret/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-oauth-config\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.495070 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"samples-operator-tls\" (UniqueName: \"kubernetes.io/secret/047f668a-55a2-4033-bd74-8df8d6ffd36f-samples-operator-tls\") pod \"cluster-samples-operator-6b564684c8-fvslw\" (UID: \"047f668a-55a2-4033-bd74-8df8d6ffd36f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.496428 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"console-serving-cert\" (UniqueName: \"kubernetes.io/secret/bdb7201e-da3c-4c31-9cae-a139067e3a83-console-serving-cert\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.497952 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/a980473e-0fae-4e77-a49b-26eb41252303-metrics-tls\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.498531 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/5a2a6f4b-477c-495b-a191-95a213405e0d-serving-cert\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.506377 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-p4kn6"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.508846 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.507105 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.512347 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.520174 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nsh7s"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.522545 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.530557 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.530840 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.531920 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.542933 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/471904d8-f887-45d3-89e4-efd24d2f7ab1-serving-cert\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.551468 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.552643 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.552680 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-dns/dns-default-mlfmh"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.552869 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.559873 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-8vvn5"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.560034 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.564823 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/471904d8-f887-45d3-89e4-efd24d2f7ab1-config\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.565881 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-24k6c"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566103 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-45854"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566117 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-trv9s"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566150 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5dggj"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566163 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566172 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566182 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-jjphf"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566193 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566207 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ingress-canary/ingress-canary-98l56"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.566255 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.570086 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.570389 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p5dvt"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.570770 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.575072 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-machine-config-operator/machine-config-server-6t2rf"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.575470 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581332 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-mkrsr"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581370 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581385 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-mtbpp"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581399 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mlfmh"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581414 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581428 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-plxfl"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581440 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581453 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-lr6gt"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581466 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581479 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581492 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581503 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581514 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-98l56"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581524 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581534 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t67z9"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581545 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581555 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581565 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4"] Jan 23 08:55:01 
crc kubenswrapper[5117]: I0123 08:55:01.581577 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581589 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-z57tz"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581599 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581609 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581620 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p5dvt"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581637 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581650 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nsh7s"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581663 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581717 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver/apiserver-9ddfb9f55-p4kn6"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.581867 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.589996 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.591803 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4mksp\" (UniqueName: \"kubernetes.io/projected/cd2d094e-247e-44e3-8732-bdf8582be0f4-kube-api-access-4mksp\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.591920 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d6a6d13-d350-414a-8d19-266287758441-etcd-client\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.592151 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b6498\" (UniqueName: \"kubernetes.io/projected/48769271-1362-41ce-a21c-ecf6d869aece-kube-api-access-b6498\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v4h4l\" (UID: \"48769271-1362-41ce-a21c-ecf6d869aece\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.592550 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d73c08bd-5e36-4e8e-9edf-0144d953a131-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.592648 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6894ed64-8531-4aaa-81ac-aed462d5fdb7-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.592781 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1609f854-4b62-4513-8fda-097c55c22a43-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.593745 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/48769271-1362-41ce-a21c-ecf6d869aece-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v4h4l\" (UID: \"48769271-1362-41ce-a21c-ecf6d869aece\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.593840 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access-d8bnr\" (UniqueName: \"kubernetes.io/projected/04eba658-5e10-4d58-9238-75ce437d7bec-kube-api-access-d8bnr\") pod \"multus-admission-controller-69db94689b-z57tz\" (UID: \"04eba658-5e10-4d58-9238-75ce437d7bec\") " pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.593910 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594578 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6894ed64-8531-4aaa-81ac-aed462d5fdb7-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594626 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c818e404-5db9-4c23-893d-1cd602c404aa-tmp\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594659 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wn7hh\" (UniqueName: \"kubernetes.io/projected/c6337181-3803-4b34-bd4d-ab52efa11902-kube-api-access-wn7hh\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594679 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-default-certificate\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594748 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7mpws\" (UniqueName: \"kubernetes.io/projected/bc9ea72b-c9a6-4fbc-873a-133709e45add-kube-api-access-7mpws\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594828 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-46qxb\" (UniqueName: \"kubernetes.io/projected/6894ed64-8531-4aaa-81ac-aed462d5fdb7-kube-api-access-46qxb\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594858 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8jkw7\" (UniqueName: 
\"kubernetes.io/projected/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-kube-api-access-8jkw7\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594881 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6894ed64-8531-4aaa-81ac-aed462d5fdb7-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594922 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e2b1743-33bf-4099-b61b-444fe484becf-signing-key\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594960 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-srv-cert\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.594975 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-config-volume\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595007 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-profile-collector-cert\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595024 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b11d81e9-7489-4632-ab25-5a9fdc51a275-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-ntn75\" (UID: \"b11d81e9-7489-4632-ab25-5a9fdc51a275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595059 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf7dbe39-9107-49a2-ac48-8264fe632e4d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595078 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc9ea72b-c9a6-4fbc-873a-133709e45add-service-ca-bundle\") pod 
\"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595094 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c08bd-5e36-4e8e-9edf-0144d953a131-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595152 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-secret-volume\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595170 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2d094e-247e-44e3-8732-bdf8582be0f4-config\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595216 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qtbx6\" (UniqueName: \"kubernetes.io/projected/4356887a-acbc-4cf4-8fe3-4cea5a46a05a-kube-api-access-qtbx6\") pod \"migrator-866fcbc849-h2827\" (UID: \"4356887a-acbc-4cf4-8fe3-4cea5a46a05a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595241 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-tmpfs\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595265 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1609f854-4b62-4513-8fda-097c55c22a43-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595287 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-etcd-ca\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595316 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6a6d13-d350-414a-8d19-266287758441-serving-cert\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 
crc kubenswrapper[5117]: I0123 08:55:01.595353 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf7dbe39-9107-49a2-ac48-8264fe632e4d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595380 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf7dbe39-9107-49a2-ac48-8264fe632e4d-images\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595397 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/32490300-66c7-458a-a8ab-7a434ac83d82-tmpfs\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595412 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d094e-247e-44e3-8732-bdf8582be0f4-serving-cert\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595434 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wdlkw\" (UniqueName: \"kubernetes.io/projected/32490300-66c7-458a-a8ab-7a434ac83d82-kube-api-access-wdlkw\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595459 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595488 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bmlc4\" (UniqueName: \"kubernetes.io/projected/cf7dbe39-9107-49a2-ac48-8264fe632e4d-kube-api-access-bmlc4\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595516 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32490300-66c7-458a-a8ab-7a434ac83d82-apiservice-cert\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595575 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c6337181-3803-4b34-bd4d-ab52efa11902-tmpfs\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595592 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w87p9\" (UniqueName: \"kubernetes.io/projected/3e2b1743-33bf-4099-b61b-444fe484becf-kube-api-access-w87p9\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.595867 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-stats-auth\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596637 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32490300-66c7-458a-a8ab-7a434ac83d82-webhook-cert\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596706 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c08bd-5e36-4e8e-9edf-0144d953a131-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596775 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tqd8w\" (UniqueName: \"kubernetes.io/projected/1609f854-4b62-4513-8fda-097c55c22a43-kube-api-access-tqd8w\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596792 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/c6337181-3803-4b34-bd4d-ab52efa11902-tmpfs\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596848 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6337181-3803-4b34-bd4d-ab52efa11902-srv-cert\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596878 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/04eba658-5e10-4d58-9238-75ce437d7bec-webhook-certs\") 
pod \"multus-admission-controller-69db94689b-z57tz\" (UID: \"04eba658-5e10-4d58-9238-75ce437d7bec\") " pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596947 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d73c08bd-5e36-4e8e-9edf-0144d953a131-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.596996 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/cf7dbe39-9107-49a2-ac48-8264fe632e4d-auth-proxy-config\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597083 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597190 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-config\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597362 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mxk74\" (UniqueName: \"kubernetes.io/projected/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-kube-api-access-mxk74\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597494 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-metrics-certs\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597536 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597703 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e2b1743-33bf-4099-b61b-444fe484becf-signing-cabundle\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.597918 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"kube-api-access-r9nz4\" (UniqueName: \"kubernetes.io/projected/7d6a6d13-d350-414a-8d19-266287758441-kube-api-access-r9nz4\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.598163 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mcc-auth-proxy-config\" (UniqueName: \"kubernetes.io/configmap/1609f854-4b62-4513-8fda-097c55c22a43-mcc-auth-proxy-config\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.598240 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ndkc4\" (UniqueName: \"kubernetes.io/projected/c818e404-5db9-4c23-893d-1cd602c404aa-kube-api-access-ndkc4\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.598363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6337181-3803-4b34-bd4d-ab52efa11902-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.598409 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6gbst\" (UniqueName: \"kubernetes.io/projected/b11d81e9-7489-4632-ab25-5a9fdc51a275-kube-api-access-6gbst\") pod \"package-server-manager-77f986bd66-ntn75\" (UID: \"b11d81e9-7489-4632-ab25-5a9fdc51a275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.598489 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d6a6d13-d350-414a-8d19-266287758441-tmp-dir\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.599045 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/7d6a6d13-d350-414a-8d19-266287758441-tmp-dir\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.599426 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-tmpfs\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.601317 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"image-registry-operator-tls\" (UniqueName: \"kubernetes.io/secret/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-image-registry-operator-tls\") pod 
\"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: W0123 08:55:01.606938 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod195fafe4_a2c8_4665_88ea_f3540c149607.slice/crio-fbdad884cfd602b94f5f12b441f3396c6c56df991649e13c49640aa7c0e08124 WatchSource:0}: Error finding container fbdad884cfd602b94f5f12b441f3396c6c56df991649e13c49640aa7c0e08124: Status 404 returned error can't find the container with id fbdad884cfd602b94f5f12b441f3396c6c56df991649e13c49640aa7c0e08124 Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.609156 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.635850 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.638579 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-trusted-ca\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.648897 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.669099 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.677069 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"images\" (UniqueName: \"kubernetes.io/configmap/cf7dbe39-9107-49a2-ac48-8264fe632e4d-images\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705159 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-v86bg\" (UniqueName: \"kubernetes.io/projected/f40f0985-7ebd-4874-a0c7-d9b896588b6b-kube-api-access-v86bg\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705216 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b6498\" (UniqueName: \"kubernetes.io/projected/48769271-1362-41ce-a21c-ecf6d869aece-kube-api-access-b6498\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v4h4l\" (UID: \"48769271-1362-41ce-a21c-ecf6d869aece\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705240 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d73c08bd-5e36-4e8e-9edf-0144d953a131-tmp\") pod 
\"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705271 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6894ed64-8531-4aaa-81ac-aed462d5fdb7-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705322 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/48769271-1362-41ce-a21c-ecf6d869aece-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v4h4l\" (UID: \"48769271-1362-41ce-a21c-ecf6d869aece\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705362 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705384 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6894ed64-8531-4aaa-81ac-aed462d5fdb7-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705409 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c818e404-5db9-4c23-893d-1cd602c404aa-tmp\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705438 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-46qxb\" (UniqueName: \"kubernetes.io/projected/6894ed64-8531-4aaa-81ac-aed462d5fdb7-kube-api-access-46qxb\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705465 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6894ed64-8531-4aaa-81ac-aed462d5fdb7-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705506 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e2b1743-33bf-4099-b61b-444fe484becf-signing-key\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " 
pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705561 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-config-volume\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705633 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b11d81e9-7489-4632-ab25-5a9fdc51a275-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-ntn75\" (UID: \"b11d81e9-7489-4632-ab25-5a9fdc51a275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705684 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c08bd-5e36-4e8e-9edf-0144d953a131-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705712 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-secret-volume\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705736 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2d094e-247e-44e3-8732-bdf8582be0f4-config\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705783 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f40f0985-7ebd-4874-a0c7-d9b896588b6b-cert\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705853 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/32490300-66c7-458a-a8ab-7a434ac83d82-tmpfs\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705876 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d094e-247e-44e3-8732-bdf8582be0f4-serving-cert\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705900 5117 reconciler_common.go:224] "operationExecutor.MountVolume 
started for volume \"kube-api-access-wdlkw\" (UniqueName: \"kubernetes.io/projected/32490300-66c7-458a-a8ab-7a434ac83d82-kube-api-access-wdlkw\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705934 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32490300-66c7-458a-a8ab-7a434ac83d82-apiservice-cert\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.705993 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w87p9\" (UniqueName: \"kubernetes.io/projected/3e2b1743-33bf-4099-b61b-444fe484becf-kube-api-access-w87p9\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706030 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32490300-66c7-458a-a8ab-7a434ac83d82-webhook-cert\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706053 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c08bd-5e36-4e8e-9edf-0144d953a131-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706085 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d73c08bd-5e36-4e8e-9edf-0144d953a131-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706120 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mxk74\" (UniqueName: \"kubernetes.io/projected/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-kube-api-access-mxk74\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706183 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706210 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: 
\"kubernetes.io/configmap/3e2b1743-33bf-4099-b61b-444fe484becf-signing-cabundle\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706222 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c818e404-5db9-4c23-893d-1cd602c404aa-tmp\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706257 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ndkc4\" (UniqueName: \"kubernetes.io/projected/c818e404-5db9-4c23-893d-1cd602c404aa-kube-api-access-ndkc4\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706289 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6gbst\" (UniqueName: \"kubernetes.io/projected/b11d81e9-7489-4632-ab25-5a9fdc51a275-kube-api-access-6gbst\") pod \"package-server-manager-77f986bd66-ntn75\" (UID: \"b11d81e9-7489-4632-ab25-5a9fdc51a275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706324 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4mksp\" (UniqueName: \"kubernetes.io/projected/cd2d094e-247e-44e3-8732-bdf8582be0f4-kube-api-access-4mksp\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.706408 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/d73c08bd-5e36-4e8e-9edf-0144d953a131-tmp\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.707343 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmpfs\" (UniqueName: \"kubernetes.io/empty-dir/32490300-66c7-458a-a8ab-7a434ac83d82-tmpfs\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.711907 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.713359 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.715412 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6cjff"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.719206 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c"] Jan 23 
08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.722611 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/cf7dbe39-9107-49a2-ac48-8264fe632e4d-proxy-tls\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.723104 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/network-metrics-daemon-gcn4t"] Jan 23 08:55:01 crc kubenswrapper[5117]: W0123 08:55:01.724562 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod07f1e5bf_d4d4_4e48_8973_9645b408072b.slice/crio-074a205a9ecdef0defc2ecb39f771749c8db5debaa8e4923c16635f26d68bb1b WatchSource:0}: Error finding container 074a205a9ecdef0defc2ecb39f771749c8db5debaa8e4923c16635f26d68bb1b: Status 404 returned error can't find the container with id 074a205a9ecdef0defc2ecb39f771749c8db5debaa8e4923c16635f26d68bb1b Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.725201 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/machine-api-operator-755bb95488-gr9mz"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.745041 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-oauth-apiserver/apiserver-8596bd845d-24k6c"] Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.769171 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.776553 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.776825 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.778509 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/7d6a6d13-d350-414a-8d19-266287758441-serving-cert\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.788977 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-certs\" (UniqueName: \"kubernetes.io/secret/04eba658-5e10-4d58-9238-75ce437d7bec-webhook-certs\") pod \"multus-admission-controller-69db94689b-z57tz\" (UID: \"04eba658-5e10-4d58-9238-75ce437d7bec\") " pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.789709 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-client\" (UniqueName: \"kubernetes.io/secret/7d6a6d13-d350-414a-8d19-266287758441-etcd-client\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.790753 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 23 08:55:01 crc kubenswrapper[5117]: W0123 08:55:01.792220 5117 manager.go:1169] 
Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod2da5d032_059b_467c_89ec_6d81958200ca.slice/crio-a31088b50a8ecac73b8dcb03c630bc60f4a3be6a818fb092c8c023f300b33162 WatchSource:0}: Error finding container a31088b50a8ecac73b8dcb03c630bc60f4a3be6a818fb092c8c023f300b33162: Status 404 returned error can't find the container with id a31088b50a8ecac73b8dcb03c630bc60f4a3be6a818fb092c8c023f300b33162 Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.797098 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-service-ca\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-etcd-service-ca\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.809102 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-v86bg\" (UniqueName: \"kubernetes.io/projected/f40f0985-7ebd-4874-a0c7-d9b896588b6b-kube-api-access-v86bg\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.809257 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f40f0985-7ebd-4874-a0c7-d9b896588b6b-cert\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.809529 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.817748 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etcd-ca\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-etcd-ca\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.829953 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.865340 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ljd6n\" (UniqueName: \"kubernetes.io/projected/5fbb2477-dcc9-41a1-9f94-638266460dc2-kube-api-access-ljd6n\") pod \"openshift-config-operator-5777786469-plxfl\" (UID: \"5fbb2477-dcc9-41a1-9f94-638266460dc2\") " pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.885036 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rxhbr\" (UniqueName: \"kubernetes.io/projected/92a6e0b4-4edc-47e6-8096-74e02438118b-kube-api-access-rxhbr\") pod \"authentication-operator-7f5c659b84-45854\" (UID: \"92a6e0b4-4edc-47e6-8096-74e02438118b\") " pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.888974 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.909100 5117 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.918777 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/7d6a6d13-d350-414a-8d19-266287758441-config\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.930818 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.958779 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.967023 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qj4zb\" (UniqueName: \"kubernetes.io/projected/f399bc34-e976-4cd2-90df-40a3d08fb983-kube-api-access-qj4zb\") pod \"oauth-openshift-66458b6674-5dggj\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.969756 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.981299 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-certificate\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-default-certificate\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:01 crc kubenswrapper[5117]: I0123 08:55:01.989287 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.001363 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-certs\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-metrics-certs\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.009425 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.022703 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"stats-auth\" (UniqueName: \"kubernetes.io/secret/bc9ea72b-c9a6-4fbc-873a-133709e45add-stats-auth\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.029239 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.029767 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.035725 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.037549 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/bc9ea72b-c9a6-4fbc-873a-133709e45add-service-ca-bundle\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.048873 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.084562 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mtchx\" (UniqueName: \"kubernetes.io/projected/5cef46b3-a580-4159-a021-b878a03e1e39-kube-api-access-mtchx\") pod \"console-operator-67c89758df-trv9s\" (UID: \"5cef46b3-a580-4159-a021-b878a03e1e39\") " pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.090914 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.114219 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.130744 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-config-operator/openshift-config-operator-5777786469-plxfl"] Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.139767 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:02 crc kubenswrapper[5117]: W0123 08:55:02.146730 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5fbb2477_dcc9_41a1_9f94_638266460dc2.slice/crio-7c96e79a76cf675bd2beef0b94c47119d7d508061e226de273f2def220d9a39e WatchSource:0}: Error finding container 7c96e79a76cf675bd2beef0b94c47119d7d508061e226de273f2def220d9a39e: Status 404 returned error can't find the container with id 7c96e79a76cf675bd2beef0b94c47119d7d508061e226de273f2def220d9a39e Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.150501 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.170269 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.189226 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.204351 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-srv-cert\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.212229 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.223220 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/c6337181-3803-4b34-bd4d-ab52efa11902-profile-collector-cert\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.224978 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-secret-volume\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.227443 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"profile-collector-cert\" (UniqueName: \"kubernetes.io/secret/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-profile-collector-cert\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.231339 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.236406 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5dggj"] Jan 23 08:55:02 
crc kubenswrapper[5117]: I0123 08:55:02.248585 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.270096 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.275266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-tls\" (UniqueName: \"kubernetes.io/secret/1609f854-4b62-4513-8fda-097c55c22a43-proxy-tls\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.278463 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication-operator/authentication-operator-7f5c659b84-45854"] Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.289255 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.309765 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.329861 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.335786 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console-operator/console-operator-67c89758df-trv9s"] Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.349073 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 23 08:55:02 crc kubenswrapper[5117]: W0123 08:55:02.358111 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5cef46b3_a580_4159_a021_b878a03e1e39.slice/crio-f05c249e6513390ea7d837daa7628c93286c04ff88e54fb1865277316179787d WatchSource:0}: Error finding container f05c249e6513390ea7d837daa7628c93286c04ff88e54fb1865277316179787d: Status 404 returned error can't find the container with id f05c249e6513390ea7d837daa7628c93286c04ff88e54fb1865277316179787d Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.358344 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" event={"ID":"5fbb2477-dcc9-41a1-9f94-638266460dc2","Type":"ContainerStarted","Data":"7c96e79a76cf675bd2beef0b94c47119d7d508061e226de273f2def220d9a39e"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.362516 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" event={"ID":"195fafe4-a2c8-4665-88ea-f3540c149607","Type":"ContainerStarted","Data":"3fcb756b16f9a734b9b4dcbeaddc83408e582e361338894f4757e5bebd196240"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.362569 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" 
event={"ID":"195fafe4-a2c8-4665-88ea-f3540c149607","Type":"ContainerStarted","Data":"fbdad884cfd602b94f5f12b441f3396c6c56df991649e13c49640aa7c0e08124"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.366444 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"srv-cert\" (UniqueName: \"kubernetes.io/secret/c6337181-3803-4b34-bd4d-ab52efa11902-srv-cert\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.367010 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" event={"ID":"4fbeda70-518a-438d-9cd2-3d6d01898eaa","Type":"ContainerStarted","Data":"a50310cd743306bbc0cb6d9d1225ab7f773c575989711dc9dae9763167b8d19f"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.367165 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" event={"ID":"4fbeda70-518a-438d-9cd2-3d6d01898eaa","Type":"ContainerStarted","Data":"35433c25f7ac88da14695768becad28d41ab91ffca450532230c564e7816fdb9"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.369887 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.374653 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" event={"ID":"3b258694-fb3a-4669-bd71-93214f0695c6","Type":"ContainerStarted","Data":"284da3f2141b2cf60406507313a5bbe8ad31eed8e3cb068e37302957b45d1425"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.374714 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" event={"ID":"3b258694-fb3a-4669-bd71-93214f0695c6","Type":"ContainerStarted","Data":"161f727c61c0c4ecdfff5c92c82200c26202c9426445c37002dffbd402b72daa"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.377557 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" event={"ID":"92a6e0b4-4edc-47e6-8096-74e02438118b","Type":"ContainerStarted","Data":"a7f1209b007e4b3ee8e8c0639a99854e91bd70a09044c6fc1f7444200be348b8"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.380635 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"control-plane-machine-set-operator-tls\" (UniqueName: \"kubernetes.io/secret/48769271-1362-41ce-a21c-ecf6d869aece-control-plane-machine-set-operator-tls\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v4h4l\" (UID: \"48769271-1362-41ce-a21c-ecf6d869aece\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.382381 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" event={"ID":"2da5d032-059b-467c-89ec-6d81958200ca","Type":"ContainerStarted","Data":"2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.382417 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" 
event={"ID":"2da5d032-059b-467c-89ec-6d81958200ca","Type":"ContainerStarted","Data":"a31088b50a8ecac73b8dcb03c630bc60f4a3be6a818fb092c8c023f300b33162"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.388262 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gcn4t" event={"ID":"ad3e9798-99d9-456f-b969-840508a6ac91","Type":"ContainerStarted","Data":"1b98dc8fc90df1a359ca9fd363d654645e5f0f0d94fded9b7234db79e94fb09c"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.388319 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gcn4t" event={"ID":"ad3e9798-99d9-456f-b969-840508a6ac91","Type":"ContainerStarted","Data":"ade2732d0248a64a762f8b861f2e6eb05522b5a4d3da927624db69414b4f3bd0"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.390330 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.390995 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.393173 5117 patch_prober.go:28] interesting pod/route-controller-manager-776cdc94d6-brg6c container/route-controller-manager namespace/openshift-route-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" start-of-body= Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.393247 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" podUID="2da5d032-059b-467c-89ec-6d81958200ca" containerName="route-controller-manager" probeResult="failure" output="Get \"https://10.217.0.9:8443/healthz\": dial tcp 10.217.0.9:8443: connect: connection refused" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.393270 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" event={"ID":"f399bc34-e976-4cd2-90df-40a3d08fb983","Type":"ContainerStarted","Data":"9dd78f0328d0e245357f6ea37c38e562b0b33e8122be8125b5adfe45a8f60b56"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.407884 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" event={"ID":"1f4bb843-1670-4180-9fe6-43c005930de0","Type":"ContainerStarted","Data":"a3485e2ed1881d00fea931d9b59a707b4dcc0568e952045e90965f7fcfed7aff"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.407989 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" event={"ID":"1f4bb843-1670-4180-9fe6-43c005930de0","Type":"ContainerStarted","Data":"07f1a7212b4fc4f3ebb21531d943e6483ca62de72ad7a60c910fe8b0f0cd990f"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.409395 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.414323 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" 
event={"ID":"07f1e5bf-d4d4-4e48-8973-9645b408072b","Type":"ContainerStarted","Data":"9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.414395 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" event={"ID":"07f1e5bf-d4d4-4e48-8973-9645b408072b","Type":"ContainerStarted","Data":"074a205a9ecdef0defc2ecb39f771749c8db5debaa8e4923c16635f26d68bb1b"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.414425 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.417196 5117 patch_prober.go:28] interesting pod/controller-manager-65b6cccf98-6cjff container/controller-manager namespace/openshift-controller-manager: Readiness probe status=failure output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" start-of-body= Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.417244 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" podUID="07f1e5bf-d4d4-4e48-8973-9645b408072b" containerName="controller-manager" probeResult="failure" output="Get \"https://10.217.0.6:8443/healthz\": dial tcp 10.217.0.6:8443: connect: connection refused" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.418325 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" event={"ID":"bf50dcce-4975-442b-980a-da2e9b937d0e","Type":"ContainerStarted","Data":"b015d1347b2d46deef0422baed78a3c9c80889571109edd3e6b04657f944d7f8"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.418362 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" event={"ID":"bf50dcce-4975-442b-980a-da2e9b937d0e","Type":"ContainerStarted","Data":"4ee64839c74d450f4d0daed46fbd6d84a1c41e01ec691dfe0e49c647a93f726c"} Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.428762 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.447289 5117 request.go:752] "Waited before sending request" delay="1.01431787s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/openshift-ingress-operator/secrets?fieldSelector=metadata.name%3Dmetrics-tls&limit=500&resourceVersion=0" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.453391 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.461217 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/6894ed64-8531-4aaa-81ac-aed462d5fdb7-metrics-tls\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.480288 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.489739 5117 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/6894ed64-8531-4aaa-81ac-aed462d5fdb7-trusted-ca\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.497239 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.509914 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.527056 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.531560 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.556080 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.558947 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.571547 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.589415 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.610007 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.625485 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"package-server-manager-serving-cert\" (UniqueName: \"kubernetes.io/secret/b11d81e9-7489-4632-ab25-5a9fdc51a275-package-server-manager-serving-cert\") pod \"package-server-manager-77f986bd66-ntn75\" (UID: \"b11d81e9-7489-4632-ab25-5a9fdc51a275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.629645 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.642238 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/32490300-66c7-458a-a8ab-7a434ac83d82-apiservice-cert\") pod 
\"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.642774 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/32490300-66c7-458a-a8ab-7a434ac83d82-webhook-cert\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.649302 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.657608 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-config-volume\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.671727 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.691581 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707757 5117 secret.go:189] Couldn't get secret openshift-kube-scheduler-operator/kube-scheduler-operator-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707795 5117 configmap.go:193] Couldn't get configMap openshift-service-ca-operator/service-ca-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707844 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/d73c08bd-5e36-4e8e-9edf-0144d953a131-serving-cert podName:d73c08bd-5e36-4e8e-9edf-0144d953a131 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:03.207822987 +0000 UTC m=+114.963948013 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/d73c08bd-5e36-4e8e-9edf-0144d953a131-serving-cert") pod "openshift-kube-scheduler-operator-54f497555d-nt8cs" (UID: "d73c08bd-5e36-4e8e-9edf-0144d953a131") : failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707889 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/cd2d094e-247e-44e3-8732-bdf8582be0f4-config podName:cd2d094e-247e-44e3-8732-bdf8582be0f4 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:03.207867458 +0000 UTC m=+114.963992484 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/cd2d094e-247e-44e3-8732-bdf8582be0f4-config") pod "service-ca-operator-5b9c976747-p7sx2" (UID: "cd2d094e-247e-44e3-8732-bdf8582be0f4") : failed to sync configmap cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707910 5117 secret.go:189] Couldn't get secret openshift-service-ca-operator/serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707933 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/cd2d094e-247e-44e3-8732-bdf8582be0f4-serving-cert podName:cd2d094e-247e-44e3-8732-bdf8582be0f4 nodeName:}" failed. No retries permitted until 2026-01-23 08:55:03.20792781 +0000 UTC m=+114.964052836 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "serving-cert" (UniqueName: "kubernetes.io/secret/cd2d094e-247e-44e3-8732-bdf8582be0f4-serving-cert") pod "service-ca-operator-5b9c976747-p7sx2" (UID: "cd2d094e-247e-44e3-8732-bdf8582be0f4") : failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707957 5117 configmap.go:193] Couldn't get configMap openshift-service-ca/signing-cabundle: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707946 5117 configmap.go:193] Couldn't get configMap openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-config: failed to sync configmap cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707990 5117 secret.go:189] Couldn't get secret openshift-service-ca/signing-key: failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.707976 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/3e2b1743-33bf-4099-b61b-444fe484becf-signing-cabundle podName:3e2b1743-33bf-4099-b61b-444fe484becf nodeName:}" failed. No retries permitted until 2026-01-23 08:55:03.207970921 +0000 UTC m=+114.964095947 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-cabundle" (UniqueName: "kubernetes.io/configmap/3e2b1743-33bf-4099-b61b-444fe484becf-signing-cabundle") pod "service-ca-74545575db-mtbpp" (UID: "3e2b1743-33bf-4099-b61b-444fe484becf") : failed to sync configmap cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.708096 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/3e2b1743-33bf-4099-b61b-444fe484becf-signing-key podName:3e2b1743-33bf-4099-b61b-444fe484becf nodeName:}" failed. No retries permitted until 2026-01-23 08:55:03.208058764 +0000 UTC m=+114.964183860 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "signing-key" (UniqueName: "kubernetes.io/secret/3e2b1743-33bf-4099-b61b-444fe484becf-signing-key") pod "service-ca-74545575db-mtbpp" (UID: "3e2b1743-33bf-4099-b61b-444fe484becf") : failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.708113 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/configmap/d73c08bd-5e36-4e8e-9edf-0144d953a131-config podName:d73c08bd-5e36-4e8e-9edf-0144d953a131 nodeName:}" failed. 
No retries permitted until 2026-01-23 08:55:03.208105125 +0000 UTC m=+114.964230241 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "config" (UniqueName: "kubernetes.io/configmap/d73c08bd-5e36-4e8e-9edf-0144d953a131-config") pod "openshift-kube-scheduler-operator-54f497555d-nt8cs" (UID: "d73c08bd-5e36-4e8e-9edf-0144d953a131") : failed to sync configmap cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.708899 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.728908 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.750819 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.769414 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.805693 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-g6ddt\" (UniqueName: \"kubernetes.io/projected/047f668a-55a2-4033-bd74-8df8d6ffd36f-kube-api-access-g6ddt\") pod \"cluster-samples-operator-6b564684c8-fvslw\" (UID: \"047f668a-55a2-4033-bd74-8df8d6ffd36f\") " pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.809513 5117 secret.go:189] Couldn't get secret openshift-ingress-canary/canary-serving-cert: failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: E0123 08:55:02.809597 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/f40f0985-7ebd-4874-a0c7-d9b896588b6b-cert podName:f40f0985-7ebd-4874-a0c7-d9b896588b6b nodeName:}" failed. No retries permitted until 2026-01-23 08:55:03.309573812 +0000 UTC m=+115.065698838 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "cert" (UniqueName: "kubernetes.io/secret/f40f0985-7ebd-4874-a0c7-d9b896588b6b-cert") pod "ingress-canary-98l56" (UID: "f40f0985-7ebd-4874-a0c7-d9b896588b6b") : failed to sync secret cache: timed out waiting for the condition Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.835294 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-thh6t\" (UniqueName: \"kubernetes.io/projected/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-kube-api-access-thh6t\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.859481 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-76rp4\" (UniqueName: \"kubernetes.io/projected/a980473e-0fae-4e77-a49b-26eb41252303-kube-api-access-76rp4\") pod \"dns-operator-799b87ffcd-jjphf\" (UID: \"a980473e-0fae-4e77-a49b-26eb41252303\") " pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.864805 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rvgdv\" (UniqueName: \"kubernetes.io/projected/5c20ff45-85a7-456d-90ae-85845c4aec43-kube-api-access-rvgdv\") pod \"downloads-747b44746d-lr6gt\" (UID: \"5c20ff45-85a7-456d-90ae-85845c4aec43\") " pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.890118 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/471904d8-f887-45d3-89e4-efd24d2f7ab1-kube-api-access\") pod \"kube-controller-manager-operator-69d5f845f8-8fjq9\" (UID: \"471904d8-f887-45d3-89e4-efd24d2f7ab1\") " pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.904364 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n9v29\" (UniqueName: \"kubernetes.io/projected/5a2a6f4b-477c-495b-a191-95a213405e0d-kube-api-access-n9v29\") pod \"openshift-controller-manager-operator-686468bdd5-wvl5x\" (UID: \"5a2a6f4b-477c-495b-a191-95a213405e0d\") " pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.928924 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mvwj9\" (UniqueName: \"kubernetes.io/projected/6f3ffb03-c769-473c-8ab8-02d4d3faae5b-kube-api-access-mvwj9\") pod \"kube-storage-version-migrator-operator-565b79b866-w9zxf\" (UID: \"6f3ffb03-c769-473c-8ab8-02d4d3faae5b\") " pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.948666 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xdmvc\" (UniqueName: \"kubernetes.io/projected/bdb7201e-da3c-4c31-9cae-a139067e3a83-kube-api-access-xdmvc\") pod \"console-64d44f6ddf-mkrsr\" (UID: \"bdb7201e-da3c-4c31-9cae-a139067e3a83\") " pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.949691 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34778: no serving certificate available for the kubelet" Jan 23 08:55:02 crc 
kubenswrapper[5117]: I0123 08:55:02.968462 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5-bound-sa-token\") pod \"cluster-image-registry-operator-86c45576b9-qhnxl\" (UID: \"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5\") " pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.969758 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.974611 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.975618 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34794: no serving certificate available for the kubelet" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.984763 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.989119 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 23 08:55:02 crc kubenswrapper[5117]: I0123 08:55:02.998505 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.009070 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.019881 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.025555 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.029058 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.036901 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.046453 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.049388 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.052418 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34800: no serving certificate available for the kubelet" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.059895 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.073299 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.089508 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.111817 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.129748 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.149628 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.149981 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34804: no serving certificate available for the kubelet" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.169075 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.192372 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.211000 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.220716 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34818: no serving certificate available for the kubelet" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.232776 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.233792 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c08bd-5e36-4e8e-9edf-0144d953a131-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.233824 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2d094e-247e-44e3-8732-bdf8582be0f4-config\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.233877 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d094e-247e-44e3-8732-bdf8582be0f4-serving-cert\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " 
pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.233917 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c08bd-5e36-4e8e-9edf-0144d953a131-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.233950 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e2b1743-33bf-4099-b61b-444fe484becf-signing-cabundle\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.234034 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e2b1743-33bf-4099-b61b-444fe484becf-signing-key\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.235772 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/cd2d094e-247e-44e3-8732-bdf8582be0f4-config\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.236307 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-cabundle\" (UniqueName: \"kubernetes.io/configmap/3e2b1743-33bf-4099-b61b-444fe484becf-signing-cabundle\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.243737 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"signing-key\" (UniqueName: \"kubernetes.io/secret/3e2b1743-33bf-4099-b61b-444fe484becf-signing-key\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.243791 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/cd2d094e-247e-44e3-8732-bdf8582be0f4-serving-cert\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.245570 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/d73c08bd-5e36-4e8e-9edf-0144d953a131-serving-cert\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.250557 5117 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.271778 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.291536 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.335535 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.335833 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.340044 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f40f0985-7ebd-4874-a0c7-d9b896588b6b-cert\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.343803 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34826: no serving certificate available for the kubelet" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.487861 5117 request.go:752] "Waited before sending request" delay="1.912037528s" reason="client-side throttling, not priority and fairness" verb="GET" URL="https://api-int.crc.testing:6443/api/v1/namespaces/hostpath-provisioner/secrets?fieldSelector=metadata.name%3Dcsi-hostpath-provisioner-sa-dockercfg-7dcws&limit=500&resourceVersion=0" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.489878 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.531825 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34832: no serving certificate available for the kubelet" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.705518 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8jkw7\" (UniqueName: \"kubernetes.io/projected/28b236a1-f9c2-4e4d-9faa-8ed9919a1976-kube-api-access-8jkw7\") pod \"olm-operator-5cdf44d969-vm6rr\" (UID: \"28b236a1-f9c2-4e4d-9faa-8ed9919a1976\") " pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.872336 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34834: no serving certificate available for the kubelet" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.884942 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/d73c08bd-5e36-4e8e-9edf-0144d953a131-config\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.893077 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.900635 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.902983 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.903469 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.903664 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.903842 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.904017 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.904205 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.904365 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.904579 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.904754 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-sysctl-allowlist\"" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.916734 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6gbst\" (UniqueName: \"kubernetes.io/projected/b11d81e9-7489-4632-ab25-5a9fdc51a275-kube-api-access-6gbst\") pod \"package-server-manager-77f986bd66-ntn75\" (UID: \"b11d81e9-7489-4632-ab25-5a9fdc51a275\") " pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.922362 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-46qxb\" (UniqueName: \"kubernetes.io/projected/6894ed64-8531-4aaa-81ac-aed462d5fdb7-kube-api-access-46qxb\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.927784 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert\" (UniqueName: \"kubernetes.io/secret/f40f0985-7ebd-4874-a0c7-d9b896588b6b-cert\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.956636 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qtbx6\" (UniqueName: 
\"kubernetes.io/projected/4356887a-acbc-4cf4-8fe3-4cea5a46a05a-kube-api-access-qtbx6\") pod \"migrator-866fcbc849-h2827\" (UID: \"4356887a-acbc-4cf4-8fe3-4cea5a46a05a\") " pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.958270 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b6498\" (UniqueName: \"kubernetes.io/projected/48769271-1362-41ce-a21c-ecf6d869aece-kube-api-access-b6498\") pod \"control-plane-machine-set-operator-75ffdb6fcd-v4h4l\" (UID: \"48769271-1362-41ce-a21c-ecf6d869aece\") " pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.958574 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" event={"ID":"f399bc34-e976-4cd2-90df-40a3d08fb983","Type":"ContainerStarted","Data":"0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e"} Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.964246 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.964305 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/console-64d44f6ddf-mkrsr"] Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.964521 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w87p9\" (UniqueName: \"kubernetes.io/projected/3e2b1743-33bf-4099-b61b-444fe484becf-kube-api-access-w87p9\") pod \"service-ca-74545575db-mtbpp\" (UID: \"3e2b1743-33bf-4099-b61b-444fe484becf\") " pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.964753 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.964926 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r9nz4\" (UniqueName: \"kubernetes.io/projected/7d6a6d13-d350-414a-8d19-266287758441-kube-api-access-r9nz4\") pod \"etcd-operator-69b85846b6-nkrmv\" (UID: \"7d6a6d13-d350-414a-8d19-266287758441\") " pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.976187 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.996861 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" event={"ID":"1f4bb843-1670-4180-9fe6-43c005930de0","Type":"ContainerStarted","Data":"d258d086de6f5a9609dfe77b2de4bd5eaac5aed37bda890b8264d27df6daf3c0"} Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.997358 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/6894ed64-8531-4aaa-81ac-aed462d5fdb7-bound-sa-token\") pod \"ingress-operator-6b9cb4dbcf-7jjtt\" (UID: \"6894ed64-8531-4aaa-81ac-aed462d5fdb7\") " pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:03 crc kubenswrapper[5117]: I0123 08:55:03.998829 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d8bnr\" (UniqueName: \"kubernetes.io/projected/04eba658-5e10-4d58-9238-75ce437d7bec-kube-api-access-d8bnr\") pod \"multus-admission-controller-69db94689b-z57tz\" (UID: \"04eba658-5e10-4d58-9238-75ce437d7bec\") " pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.002966 5117 generic.go:358] "Generic (PLEG): container finished" podID="bf50dcce-4975-442b-980a-da2e9b937d0e" containerID="b015d1347b2d46deef0422baed78a3c9c80889571109edd3e6b04657f944d7f8" exitCode=0 Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.003099 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" event={"ID":"bf50dcce-4975-442b-980a-da2e9b937d0e","Type":"ContainerDied","Data":"b015d1347b2d46deef0422baed78a3c9c80889571109edd3e6b04657f944d7f8"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.003203 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" event={"ID":"bf50dcce-4975-442b-980a-da2e9b937d0e","Type":"ContainerStarted","Data":"38150bc44e56a890c66c48b3ec68aa1054c8d8db63f6085565b67018cba270ca"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.006452 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7mpws\" (UniqueName: \"kubernetes.io/projected/bc9ea72b-c9a6-4fbc-873a-133709e45add-kube-api-access-7mpws\") pod \"router-default-68cf44c8b8-mlq9j\" (UID: \"bc9ea72b-c9a6-4fbc-873a-133709e45add\") " pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.007236 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tqd8w\" (UniqueName: \"kubernetes.io/projected/1609f854-4b62-4513-8fda-097c55c22a43-kube-api-access-tqd8w\") pod \"machine-config-controller-f9cdd68f7-snhxl\" (UID: \"1609f854-4b62-4513-8fda-097c55c22a43\") " pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.007365 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ndkc4\" (UniqueName: \"kubernetes.io/projected/c818e404-5db9-4c23-893d-1cd602c404aa-kube-api-access-ndkc4\") pod \"marketplace-operator-547dbd544d-t67z9\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.007496 5117 generic.go:358] "Generic (PLEG): container 
finished" podID="5fbb2477-dcc9-41a1-9f94-638266460dc2" containerID="86e54b5dc13cb8fe4dfc0e37ae655fbfcbbd4b18c9a317b09041d5eb4d85b2bc" exitCode=0 Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.007963 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wn7hh\" (UniqueName: \"kubernetes.io/projected/c6337181-3803-4b34-bd4d-ab52efa11902-kube-api-access-wn7hh\") pod \"catalog-operator-75ff9f647d-cx7rb\" (UID: \"c6337181-3803-4b34-bd4d-ab52efa11902\") " pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.007992 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" event={"ID":"5fbb2477-dcc9-41a1-9f94-638266460dc2","Type":"ContainerDied","Data":"86e54b5dc13cb8fe4dfc0e37ae655fbfcbbd4b18c9a317b09041d5eb4d85b2bc"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.008537 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bmlc4\" (UniqueName: \"kubernetes.io/projected/cf7dbe39-9107-49a2-ac48-8264fe632e4d-kube-api-access-bmlc4\") pod \"machine-config-operator-67c9d58cbb-h4kwm\" (UID: \"cf7dbe39-9107-49a2-ac48-8264fe632e4d\") " pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.008923 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4mksp\" (UniqueName: \"kubernetes.io/projected/cd2d094e-247e-44e3-8732-bdf8582be0f4-kube-api-access-4mksp\") pod \"service-ca-operator-5b9c976747-p7sx2\" (UID: \"cd2d094e-247e-44e3-8732-bdf8582be0f4\") " pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.010287 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/d73c08bd-5e36-4e8e-9edf-0144d953a131-kube-api-access\") pod \"openshift-kube-scheduler-operator-54f497555d-nt8cs\" (UID: \"d73c08bd-5e36-4e8e-9edf-0144d953a131\") " pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.013825 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-v86bg\" (UniqueName: \"kubernetes.io/projected/f40f0985-7ebd-4874-a0c7-d9b896588b6b-kube-api-access-v86bg\") pod \"ingress-canary-98l56\" (UID: \"f40f0985-7ebd-4874-a0c7-d9b896588b6b\") " pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.014384 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mxk74\" (UniqueName: \"kubernetes.io/projected/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-kube-api-access-mxk74\") pod \"collect-profiles-29485965-7b78f\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.015218 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wdlkw\" (UniqueName: \"kubernetes.io/projected/32490300-66c7-458a-a8ab-7a434ac83d82-kube-api-access-wdlkw\") pod \"packageserver-7d4fc7d867-cmk6w\" (UID: \"32490300-66c7-458a-a8ab-7a434ac83d82\") " pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.016924 
5117 generic.go:358] "Generic (PLEG): container finished" podID="4fbeda70-518a-438d-9cd2-3d6d01898eaa" containerID="a50310cd743306bbc0cb6d9d1225ab7f773c575989711dc9dae9763167b8d19f" exitCode=0 Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.017025 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" event={"ID":"4fbeda70-518a-438d-9cd2-3d6d01898eaa","Type":"ContainerDied","Data":"a50310cd743306bbc0cb6d9d1225ab7f773c575989711dc9dae9763167b8d19f"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.035428 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.039239 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.040450 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" event={"ID":"92a6e0b4-4edc-47e6-8096-74e02438118b","Type":"ContainerStarted","Data":"34fa45a4e96548aa3b47774c905478321528404d68a0b6de666258d418539978"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.042201 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.043497 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.055614 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.057037 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/network-metrics-daemon-gcn4t" event={"ID":"ad3e9798-99d9-456f-b969-840508a6ac91","Type":"ContainerStarted","Data":"ea75d68db15eb5d8be5842f5060105e3469927ba95001e98c572b96a65ed9b81"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.062729 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.062915 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.063528 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/183f2899-9fc9-48db-b0f6-7222c7364b50-certs\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.063649 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3884778-855d-4be3-aeab-4a9552ec10ac-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.063747 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qx9p9\" (UniqueName: \"kubernetes.io/projected/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-kube-api-access-qx9p9\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.063933 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3668af23-b087-479a-b9d8-d6e8b963ce57-ca-trust-extracted\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064081 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58cl6\" (UniqueName: \"kubernetes.io/projected/183f2899-9fc9-48db-b0f6-7222c7364b50-kube-api-access-58cl6\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064240 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-certificates\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064395 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pgpkx\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-kube-api-access-pgpkx\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064499 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e7fef31-4908-4580-a0b5-96af5af3dc55-metrics-tls\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 
08:55:04.064532 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e7fef31-4908-4580-a0b5-96af5af3dc55-tmp-dir\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064569 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-bound-sa-token\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064584 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-registration-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064614 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-trusted-ca\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064631 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/348d4ab5-2b83-4443-9024-8d7d3db346d7-tmp-dir\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064704 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c3884778-855d-4be3-aeab-4a9552ec10ac-ready\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064780 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7fef31-4908-4580-a0b5-96af5af3dc55-config-volume\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.064824 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.065422 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-b7pw6\" (UniqueName: 
\"kubernetes.io/projected/c3884778-855d-4be3-aeab-4a9552ec10ac-kube-api-access-b7pw6\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.065695 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ql4wp\" (UniqueName: \"kubernetes.io/projected/5e7fef31-4908-4580-a0b5-96af5af3dc55-kube-api-access-ql4wp\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.065744 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-csi-data-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.065874 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-socket-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.065921 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/183f2899-9fc9-48db-b0f6-7222c7364b50-node-bootstrap-token\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.065956 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348d4ab5-2b83-4443-9024-8d7d3db346d7-serving-cert\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066037 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-tls\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066061 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348d4ab5-2b83-4443-9024-8d7d3db346d7-config\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066108 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3668af23-b087-479a-b9d8-d6e8b963ce57-installation-pull-secrets\") pod \"image-registry-66587d64c8-nsh7s\" (UID: 
\"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066152 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/348d4ab5-2b83-4443-9024-8d7d3db346d7-kube-api-access\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066186 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-mountpoint-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066233 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3884778-855d-4be3-aeab-4a9552ec10ac-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.066321 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-plugins-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.068235 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.568219768 +0000 UTC m=+116.324344794 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.072954 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.073633 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-service-ca/service-ca-74545575db-mtbpp" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.077151 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-trv9s" event={"ID":"5cef46b3-a580-4159-a021-b878a03e1e39","Type":"ContainerStarted","Data":"5bc8af799cf983e1db8a83b32f0b4a5a17600a5c1b9464fe412c529a3cac20e5"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.077207 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console-operator/console-operator-67c89758df-trv9s" event={"ID":"5cef46b3-a580-4159-a021-b878a03e1e39","Type":"ContainerStarted","Data":"f05c249e6513390ea7d837daa7628c93286c04ff88e54fb1865277316179787d"} Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.079977 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress-canary/ingress-canary-98l56" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.080096 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.084879 5117 patch_prober.go:28] interesting pod/console-operator-67c89758df-trv9s container/console-operator namespace/openshift-console-operator: Readiness probe status=failure output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" start-of-body= Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.084949 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console-operator/console-operator-67c89758df-trv9s" podUID="5cef46b3-a580-4159-a021-b878a03e1e39" containerName="console-operator" probeResult="failure" output="Get \"https://10.217.0.11:8443/readyz\": dial tcp 10.217.0.11:8443: connect: connection refused" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.088580 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.090857 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.098592 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.102443 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.172836 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.173098 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-23 08:55:04.673078168 +0000 UTC m=+116.429203194 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.173406 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58cl6\" (UniqueName: \"kubernetes.io/projected/183f2899-9fc9-48db-b0f6-7222c7364b50-kube-api-access-58cl6\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.174896 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-certificates\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.174958 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pgpkx\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-kube-api-access-pgpkx\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.174984 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e7fef31-4908-4580-a0b5-96af5af3dc55-metrics-tls\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.175025 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e7fef31-4908-4580-a0b5-96af5af3dc55-tmp-dir\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.175754 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/5e7fef31-4908-4580-a0b5-96af5af3dc55-tmp-dir\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.175980 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-certificates\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176090 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: 
\"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-bound-sa-token\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176108 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-registration-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176167 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-trusted-ca\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176183 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/348d4ab5-2b83-4443-9024-8d7d3db346d7-tmp-dir\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176488 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c3884778-855d-4be3-aeab-4a9552ec10ac-ready\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176853 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c3884778-855d-4be3-aeab-4a9552ec10ac-ready\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.176875 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7fef31-4908-4580-a0b5-96af5af3dc55-config-volume\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.177047 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registration-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-registration-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.179437 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/5e7fef31-4908-4580-a0b5-96af5af3dc55-config-volume\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.179895 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.180126 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.68011289 +0000 UTC m=+116.436237916 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.180743 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-b7pw6\" (UniqueName: \"kubernetes.io/projected/c3884778-855d-4be3-aeab-4a9552ec10ac-kube-api-access-b7pw6\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.181802 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ql4wp\" (UniqueName: \"kubernetes.io/projected/5e7fef31-4908-4580-a0b5-96af5af3dc55-kube-api-access-ql4wp\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.181829 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-csi-data-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.182252 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"csi-data-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-csi-data-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.182838 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-socket-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.182869 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/183f2899-9fc9-48db-b0f6-7222c7364b50-node-bootstrap-token\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.183842 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348d4ab5-2b83-4443-9024-8d7d3db346d7-serving-cert\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.183929 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-tls\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.183957 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348d4ab5-2b83-4443-9024-8d7d3db346d7-config\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.184915 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-socket-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.187895 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3668af23-b087-479a-b9d8-d6e8b963ce57-installation-pull-secrets\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.187928 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/348d4ab5-2b83-4443-9024-8d7d3db346d7-kube-api-access\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.187959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-mountpoint-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.187988 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3884778-855d-4be3-aeab-4a9552ec10ac-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.188057 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-plugins-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.188219 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"certs\" (UniqueName: \"kubernetes.io/secret/183f2899-9fc9-48db-b0f6-7222c7364b50-certs\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.188258 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3884778-855d-4be3-aeab-4a9552ec10ac-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.188273 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qx9p9\" (UniqueName: \"kubernetes.io/projected/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-kube-api-access-qx9p9\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.188368 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3668af23-b087-479a-b9d8-d6e8b963ce57-ca-trust-extracted\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.189175 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"metrics-tls\" (UniqueName: \"kubernetes.io/secret/5e7fef31-4908-4580-a0b5-96af5af3dc55-metrics-tls\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.191232 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/348d4ab5-2b83-4443-9024-8d7d3db346d7-config\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.191584 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"mountpoint-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-mountpoint-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.191748 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3884778-855d-4be3-aeab-4a9552ec10ac-tuning-conf-dir\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.194733 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"plugins-dir\" (UniqueName: \"kubernetes.io/host-path/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-plugins-dir\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " 
pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.196477 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3668af23-b087-479a-b9d8-d6e8b963ce57-ca-trust-extracted\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.197441 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/348d4ab5-2b83-4443-9024-8d7d3db346d7-serving-cert\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.197669 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3884778-855d-4be3-aeab-4a9552ec10ac-cni-sysctl-allowlist\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.198352 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/348d4ab5-2b83-4443-9024-8d7d3db346d7-tmp-dir\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.203349 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-tls\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.203545 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-trusted-ca\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.204285 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" podStartSLOduration=94.204271251 podStartE2EDuration="1m34.204271251s" podCreationTimestamp="2026-01-23 08:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:04.190210136 +0000 UTC m=+115.946335182" watchObservedRunningTime="2026-01-23 08:55:04.204271251 +0000 UTC m=+115.960396277" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.209515 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"certs\" (UniqueName: \"kubernetes.io/secret/183f2899-9fc9-48db-b0f6-7222c7364b50-certs\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.214831 5117 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-bootstrap-token\" (UniqueName: \"kubernetes.io/secret/183f2899-9fc9-48db-b0f6-7222c7364b50-node-bootstrap-token\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.234664 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3668af23-b087-479a-b9d8-d6e8b963ce57-installation-pull-secrets\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.249386 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pgpkx\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-kube-api-access-pgpkx\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.255635 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.265544 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.273923 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58cl6\" (UniqueName: \"kubernetes.io/projected/183f2899-9fc9-48db-b0f6-7222c7364b50-kube-api-access-58cl6\") pod \"machine-config-server-6t2rf\" (UID: \"183f2899-9fc9-48db-b0f6-7222c7364b50\") " pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.286432 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.289591 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-b7pw6\" (UniqueName: \"kubernetes.io/projected/c3884778-855d-4be3-aeab-4a9552ec10ac-kube-api-access-b7pw6\") pod \"cni-sysctl-allowlist-ds-8vvn5\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.289849 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.290119 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.79010005 +0000 UTC m=+116.546225076 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.290294 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.290541 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.790534442 +0000 UTC m=+116.546659468 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.292706 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-bound-sa-token\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.293163 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns-operator/dns-operator-799b87ffcd-jjphf"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.300359 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.300550 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.309424 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.317048 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.327174 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ql4wp\" (UniqueName: \"kubernetes.io/projected/5e7fef31-4908-4580-a0b5-96af5af3dc55-kube-api-access-ql4wp\") pod \"dns-default-mlfmh\" (UID: \"5e7fef31-4908-4580-a0b5-96af5af3dc55\") " pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.328800 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.341682 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.345691 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/348d4ab5-2b83-4443-9024-8d7d3db346d7-kube-api-access\") pod \"kube-apiserver-operator-575994946d-z9gw4\" (UID: \"348d4ab5-2b83-4443-9024-8d7d3db346d7\") " pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.356918 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-console/downloads-747b44746d-lr6gt"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.357527 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qx9p9\" (UniqueName: \"kubernetes.io/projected/08c5724f-8e70-4d84-acf6-3036bcbc7f4a-kube-api-access-qx9p9\") pod \"csi-hostpathplugin-p5dvt\" (UID: \"08c5724f-8e70-4d84-acf6-3036bcbc7f4a\") " pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.358597 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl"] Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.392646 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.392818 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.892792681 +0000 UTC m=+116.648917707 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.393028 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.393519 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.89351066 +0000 UTC m=+116.649635686 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: W0123 08:55:04.422641 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda980473e_0fae_4e77_a49b_26eb41252303.slice/crio-90903a0e0df06a669fe1cea282a851fc731e185806a806e80ca5ab1dc6763d1f WatchSource:0}: Error finding container 90903a0e0df06a669fe1cea282a851fc731e185806a806e80ca5ab1dc6763d1f: Status 404 returned error can't find the container with id 90903a0e0df06a669fe1cea282a851fc731e185806a806e80ca5ab1dc6763d1f Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.422983 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.426229 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-machine-config-operator/machine-config-server-6t2rf" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.444859 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.455174 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-machine-approver/machine-approver-54c688565-grmnk" podStartSLOduration=95.455155517 podStartE2EDuration="1m35.455155517s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:04.452657319 +0000 UTC m=+116.208782345" watchObservedRunningTime="2026-01-23 08:55:04.455155517 +0000 UTC m=+116.211280533" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.455528 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.495823 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.496226 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.99619094 +0000 UTC m=+116.752315976 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.496384 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.496776 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:04.996768676 +0000 UTC m=+116.752893702 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.538075 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34846: no serving certificate available for the kubelet" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.597629 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.598318 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.098296245 +0000 UTC m=+116.854421271 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.612929 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" podStartSLOduration=95.612914105 podStartE2EDuration="1m35.612914105s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:04.611423074 +0000 UTC m=+116.367548100" watchObservedRunningTime="2026-01-23 08:55:04.612914105 +0000 UTC m=+116.369039121" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.701917 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.702241 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.202229049 +0000 UTC m=+116.958354075 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.807244 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.807910 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.307892391 +0000 UTC m=+117.064017417 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.808052 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication-operator/authentication-operator-7f5c659b84-45854" podStartSLOduration=95.807907062 podStartE2EDuration="1m35.807907062s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:04.800090968 +0000 UTC m=+116.556216004" watchObservedRunningTime="2026-01-23 08:55:04.807907062 +0000 UTC m=+116.564032088" Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.918308 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:04 crc kubenswrapper[5117]: E0123 08:55:04.918936 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.41892014 +0000 UTC m=+117.175045166 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:04 crc kubenswrapper[5117]: I0123 08:55:04.966528 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/machine-api-operator-755bb95488-gr9mz" podStartSLOduration=95.966501082 podStartE2EDuration="1m35.966501082s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:04.964671482 +0000 UTC m=+116.720796528" watchObservedRunningTime="2026-01-23 08:55:04.966501082 +0000 UTC m=+116.722626118" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.019695 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.020178 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.52014288 +0000 UTC m=+117.276267916 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.085250 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" event={"ID":"471904d8-f887-45d3-89e4-efd24d2f7ab1","Type":"ContainerStarted","Data":"437430a2841136d67fc3d0879db854dfbac71ce0722fce93b756367d69dd4289"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.090151 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" event={"ID":"5a2a6f4b-477c-495b-a191-95a213405e0d","Type":"ContainerStarted","Data":"cd310f2b7ae5d314634ff3908c97fcf4b981f8465d455396b167bf97a3071c5e"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.100964 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" event={"ID":"bf50dcce-4975-442b-980a-da2e9b937d0e","Type":"ContainerStarted","Data":"e34ca82665b6d7774a3d20a5e332c84b64222c425e14a354201d1e4f84ab33c9"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.102825 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-server-6t2rf" event={"ID":"183f2899-9fc9-48db-b0f6-7222c7364b50","Type":"ContainerStarted","Data":"7a4422c7d5b6843356787b521f61c430e80ccf5957f2b1be0cafe1ad2b7f8350"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.106058 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" event={"ID":"bc9ea72b-c9a6-4fbc-873a-133709e45add","Type":"ContainerStarted","Data":"ee5936591a7d665e98e3eaa96eced368a8c54fd86e069342e35e13ba218d97fb"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.107146 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-lr6gt" event={"ID":"5c20ff45-85a7-456d-90ae-85845c4aec43","Type":"ContainerStarted","Data":"c23dbb914db4a13bf322b20f7de151b7c9df1a2468d4a6ac342aa4124a9c46a3"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.108462 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" event={"ID":"5fbb2477-dcc9-41a1-9f94-638266460dc2","Type":"ContainerStarted","Data":"052b4d17e5d990d6327d85b78bca1450f11cc9566a1467a3d9403fa69c332c19"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.109867 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.110779 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" event={"ID":"c3884778-855d-4be3-aeab-4a9552ec10ac","Type":"ContainerStarted","Data":"e125e9a3e6df5ba65342b918af8fc5fcb0081e4a01e6124d829a8cb3ad03e92c"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.112736 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" event={"ID":"6f3ffb03-c769-473c-8ab8-02d4d3faae5b","Type":"ContainerStarted","Data":"eb6da742de8a502b9668192b0a1b431d6c6adc3fcb88eb64a3fa6f3025e5ad82"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.118527 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" event={"ID":"a980473e-0fae-4e77-a49b-26eb41252303","Type":"ContainerStarted","Data":"90903a0e0df06a669fe1cea282a851fc731e185806a806e80ca5ab1dc6763d1f"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.122568 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.122986 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.622969224 +0000 UTC m=+117.379094250 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.123960 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-mkrsr" event={"ID":"bdb7201e-da3c-4c31-9cae-a139067e3a83","Type":"ContainerStarted","Data":"d264918689362b609808efdf1ee3b47aa7f8b3225e990c0c6b8b250b5541ecac"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.123990 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/console-64d44f6ddf-mkrsr" event={"ID":"bdb7201e-da3c-4c31-9cae-a139067e3a83","Type":"ContainerStarted","Data":"9a1af7b996de4564053a334793df4c5b09db6ed4452ad56a35e3a23ae30746da"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.130401 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" event={"ID":"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5","Type":"ContainerStarted","Data":"f07ba5934664bedd7088da91dde030f6cd72f0d694c418dcc9a20c232dfe80a8"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.132620 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" event={"ID":"047f668a-55a2-4033-bd74-8df8d6ffd36f","Type":"ContainerStarted","Data":"70751104d55c5506fb124c640424f684bef0add006d82923a87d6500ac66400f"} Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.223866 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: 
\"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.225004 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.724986206 +0000 UTC m=+117.481111232 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.261403 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console-operator/console-operator-67c89758df-trv9s" podStartSLOduration=96.261383412 podStartE2EDuration="1m36.261383412s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:05.225770888 +0000 UTC m=+116.981895914" watchObservedRunningTime="2026-01-23 08:55:05.261383412 +0000 UTC m=+117.017508448" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.326031 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.331327 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.831309536 +0000 UTC m=+117.587434552 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.367748 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver-operator/openshift-apiserver-operator-846cbfc458-cs5tv" podStartSLOduration=96.367732263 podStartE2EDuration="1m36.367732263s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:05.367281231 +0000 UTC m=+117.123406257" watchObservedRunningTime="2026-01-23 08:55:05.367732263 +0000 UTC m=+117.123857279" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.438216 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.438575 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.938554031 +0000 UTC m=+117.694679047 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.441819 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.442179 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:05.94216288 +0000 UTC m=+117.698287906 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.543571 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.544105 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.044072109 +0000 UTC m=+117.800197135 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.544238 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.544565 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.044555042 +0000 UTC m=+117.800680068 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.650070 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.650278 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.150256765 +0000 UTC m=+117.906381791 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.650693 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.651048 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.151032406 +0000 UTC m=+117.907157432 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.699206 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console-operator/console-operator-67c89758df-trv9s" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.754832 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.755207 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.255184647 +0000 UTC m=+118.011309673 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.769531 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" podStartSLOduration=96.76951325900001 podStartE2EDuration="1m36.769513259s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:05.768607724 +0000 UTC m=+117.524732760" watchObservedRunningTime="2026-01-23 08:55:05.769513259 +0000 UTC m=+117.525638285" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.813978 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l"] Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.852581 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/network-metrics-daemon-gcn4t" podStartSLOduration=96.852559372 podStartE2EDuration="1m36.852559372s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:05.848500381 +0000 UTC m=+117.604625407" watchObservedRunningTime="2026-01-23 08:55:05.852559372 +0000 UTC m=+117.608684408" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.862576 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.862969 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.362951256 +0000 UTC m=+118.119076282 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.888889 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.889192 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.889206 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv"] Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.906314 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75"] Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.906463 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr"] Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.909990 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" podStartSLOduration=96.909964503 podStartE2EDuration="1m36.909964503s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:05.900756101 +0000 UTC m=+117.656881117" watchObservedRunningTime="2026-01-23 08:55:05.909964503 +0000 UTC m=+117.666089539" Jan 23 08:55:05 crc kubenswrapper[5117]: W0123 08:55:05.927367 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb11d81e9_7489_4632_ab25_5a9fdc51a275.slice/crio-235fec57ed14056d88d3bb5983155886ef195b5104ce79f9e3640ccba02c7bd9 WatchSource:0}: Error finding container 235fec57ed14056d88d3bb5983155886ef195b5104ce79f9e3640ccba02c7bd9: Status 404 returned error can't find the container with id 235fec57ed14056d88d3bb5983155886ef195b5104ce79f9e3640ccba02c7bd9 Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.936710 5117 ???:1] "http: TLS handshake error from 192.168.126.11:34854: no serving certificate available for the kubelet" Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.965053 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: 
\"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.965542 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.465492403 +0000 UTC m=+118.221617429 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.982667 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:05 crc kubenswrapper[5117]: E0123 08:55:05.983120 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.483102585 +0000 UTC m=+118.239227611 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:05 crc kubenswrapper[5117]: I0123 08:55:05.975826 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.008613 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" podStartSLOduration=97.008572642 podStartE2EDuration="1m37.008572642s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:05.985540091 +0000 UTC m=+117.741665127" watchObservedRunningTime="2026-01-23 08:55:06.008572642 +0000 UTC m=+117.764697668" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.060352 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/console-64d44f6ddf-mkrsr" podStartSLOduration=97.060331918 podStartE2EDuration="1m37.060331918s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.012282903 +0000 UTC m=+117.768407929" watchObservedRunningTime="2026-01-23 08:55:06.060331918 +0000 UTC m=+117.816456944" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.061389 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.083702 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.084163 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-service-ca/service-ca-74545575db-mtbpp"] Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.084306 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.584285904 +0000 UTC m=+118.340410930 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.096822 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.133326 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.133372 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-canary/ingress-canary-98l56"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.156014 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" event={"ID":"047f668a-55a2-4033-bd74-8df8d6ffd36f","Type":"ContainerStarted","Data":"8b16e15dd38c0b32f4b8e9ee5332a535ce991b316ce0446782bf46b71a2da032"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.156095 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" event={"ID":"047f668a-55a2-4033-bd74-8df8d6ffd36f","Type":"ContainerStarted","Data":"1663dca94581474e0730fa2a7f08e9cab9b26420ae0d171c73315ac5e0d7b539"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.171791 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["hostpath-provisioner/csi-hostpathplugin-p5dvt"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.174411 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" event={"ID":"48769271-1362-41ce-a21c-ecf6d869aece","Type":"ContainerStarted","Data":"08702cffa245e22fe37f9cfe91325d5498ea240651d4bbab9bf135ab1ce98071"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.183973 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.187943 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.188264 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.688249809 +0000 UTC m=+118.444374835 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.191960 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" event={"ID":"4fbeda70-518a-438d-9cd2-3d6d01898eaa","Type":"ContainerStarted","Data":"4c5273d6fe347c2eb9034cbe723f6487e2d1e50f3e6a8d787b7c3922afc6a125"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.194885 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-multus/multus-admission-controller-69db94689b-z57tz"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.204269 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-cluster-samples-operator/cluster-samples-operator-6b564684c8-fvslw" podStartSLOduration=97.204248957 podStartE2EDuration="1m37.204248957s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.201916113 +0000 UTC m=+117.958041149" watchObservedRunningTime="2026-01-23 08:55:06.204248957 +0000 UTC m=+117.960373983" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.213049 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" event={"ID":"cd2d094e-247e-44e3-8732-bdf8582be0f4","Type":"ContainerStarted","Data":"7b3f4bc5948dca713e21fc535dcea089a380b0e8b3477fde8ba428077005c112"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.227119 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.266673 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" event={"ID":"471904d8-f887-45d3-89e4-efd24d2f7ab1","Type":"ContainerStarted","Data":"d1cbd88cb1ae7e1e27e5ca8ae915af24cbe6b93f765f89ad2b8adbf00302d34b"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.270253 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" podStartSLOduration=97.270231123 podStartE2EDuration="1m37.270231123s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.259590241 +0000 UTC m=+118.015715257" watchObservedRunningTime="2026-01-23 08:55:06.270231123 +0000 UTC m=+118.026356159" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.270797 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" event={"ID":"5a2a6f4b-477c-495b-a191-95a213405e0d","Type":"ContainerStarted","Data":"9c1590645d7b0e5aec5861ee5728786818dfc919c36543622eafe05ed2dcc1a4"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.278528 5117 kubelet.go:2569] "SyncLoop (PLEG): event for 
pod" pod="openshift-machine-config-operator/machine-config-server-6t2rf" event={"ID":"183f2899-9fc9-48db-b0f6-7222c7364b50","Type":"ContainerStarted","Data":"c77f73a889ad52d36a74c952eb3e17c5351672ba066537b50971892b3479e0fe"} Jan 23 08:55:06 crc kubenswrapper[5117]: W0123 08:55:06.280677 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod32490300_66c7_458a_a8ab_7a434ac83d82.slice/crio-2db5cfce5e71963fb85e383b321e12c1176cc9b083d6fe30729c73e673014366 WatchSource:0}: Error finding container 2db5cfce5e71963fb85e383b321e12c1176cc9b083d6fe30729c73e673014366: Status 404 returned error can't find the container with id 2db5cfce5e71963fb85e383b321e12c1176cc9b083d6fe30729c73e673014366 Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.281222 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t67z9"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.289059 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.290411 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.790389454 +0000 UTC m=+118.546514480 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.293843 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-controller-manager-operator/kube-controller-manager-operator-69d5f845f8-8fjq9" podStartSLOduration=97.293820238 podStartE2EDuration="1m37.293820238s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.290041555 +0000 UTC m=+118.046166591" watchObservedRunningTime="2026-01-23 08:55:06.293820238 +0000 UTC m=+118.049945264" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.308014 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" event={"ID":"bc9ea72b-c9a6-4fbc-873a-133709e45add","Type":"ContainerStarted","Data":"7aad4c2b395b04a5869363907e7e4e2d06c0397f6e245557338109689cd44daf"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.311772 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.330411 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-console/downloads-747b44746d-lr6gt" 
event={"ID":"5c20ff45-85a7-456d-90ae-85845c4aec43","Type":"ContainerStarted","Data":"16c68a8046f997e78e4d74cec7cbfe4b23d6e13305d000fdcb513fcb29d69d8a"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.344081 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" event={"ID":"c6337181-3803-4b34-bd4d-ab52efa11902","Type":"ContainerStarted","Data":"4df59a43329672f5539447390f0391786d7802766fcb2197917fac58dd560428"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.351935 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager-operator/openshift-controller-manager-operator-686468bdd5-wvl5x" podStartSLOduration=97.351920478 podStartE2EDuration="1m37.351920478s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.35052392 +0000 UTC m=+118.106648956" watchObservedRunningTime="2026-01-23 08:55:06.351920478 +0000 UTC m=+118.108045504" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.356641 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" event={"ID":"7d6a6d13-d350-414a-8d19-266287758441","Type":"ContainerStarted","Data":"7b1cbe1e985dc78d643d7641174939295dce6d107d2ff86e1c25f6838462d971"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.368754 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.375524 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" event={"ID":"c3884778-855d-4be3-aeab-4a9552ec10ac","Type":"ContainerStarted","Data":"caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.377291 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.388607 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-lr6gt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.388703 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-lr6gt" podUID="5c20ff45-85a7-456d-90ae-85845c4aec43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.392062 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podStartSLOduration=97.392046017 podStartE2EDuration="1m37.392046017s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.390494544 +0000 UTC m=+118.146619590" watchObservedRunningTime="2026-01-23 08:55:06.392046017 +0000 UTC m=+118.148171053" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.404895 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.405374 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:06.905350011 +0000 UTC m=+118.661475037 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.407397 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" event={"ID":"6f3ffb03-c769-473c-8ab8-02d4d3faae5b","Type":"ContainerStarted","Data":"35ae9a6eae420291f655ca59886446c0cd88023a7341fde8aa3b46ff9c66f7f9"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.418392 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-server-6t2rf" podStartSLOduration=7.418367097 podStartE2EDuration="7.418367097s" podCreationTimestamp="2026-01-23 08:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.413563665 +0000 UTC m=+118.169688701" watchObservedRunningTime="2026-01-23 08:55:06.418367097 +0000 UTC m=+118.174492123" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.427024 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" event={"ID":"a980473e-0fae-4e77-a49b-26eb41252303","Type":"ContainerStarted","Data":"8cfea5fbcdf6d1293d4c1a33424d50471c260239b8b1f025cdcc719cb973bef8"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.428328 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" event={"ID":"b11d81e9-7489-4632-ab25-5a9fdc51a275","Type":"ContainerStarted","Data":"235fec57ed14056d88d3bb5983155886ef195b5104ce79f9e3640ccba02c7bd9"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.430076 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" event={"ID":"7eaf71a8-e4ea-4a47-a92b-4d6d34bde9a5","Type":"ContainerStarted","Data":"81978773174bff547e7dba33935d6d4de3e33f26075b6f7eeb7c39279ae641a0"} Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.433005 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" event={"ID":"28b236a1-f9c2-4e4d-9faa-8ed9919a1976","Type":"ContainerStarted","Data":"ed24054e95571a34201eefe918bcf83c7ce88b4aa159156945a68312f3fb4b1d"} Jan 23 08:55:06 crc 
kubenswrapper[5117]: I0123 08:55:06.453450 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.461075 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-dns/dns-default-mlfmh"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.472204 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.478943 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-console/downloads-747b44746d-lr6gt" podStartSLOduration=97.478915073 podStartE2EDuration="1m37.478915073s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.454064094 +0000 UTC m=+118.210189120" watchObservedRunningTime="2026-01-23 08:55:06.478915073 +0000 UTC m=+118.235040099" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.483297 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4"] Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.501397 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" podStartSLOduration=7.501374738 podStartE2EDuration="7.501374738s" podCreationTimestamp="2026-01-23 08:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.493269516 +0000 UTC m=+118.249394542" watchObservedRunningTime="2026-01-23 08:55:06.501374738 +0000 UTC m=+118.257499764" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.506646 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.507108 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.007085214 +0000 UTC m=+118.763210240 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.507705 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.512023 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.012001968 +0000 UTC m=+118.768126994 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.536741 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator-operator/kube-storage-version-migrator-operator-565b79b866-w9zxf" podStartSLOduration=97.536725625 podStartE2EDuration="1m37.536725625s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.535246095 +0000 UTC m=+118.291371131" watchObservedRunningTime="2026-01-23 08:55:06.536725625 +0000 UTC m=+118.292850651" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.549116 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.592750 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/cluster-image-registry-operator-86c45576b9-qhnxl" podStartSLOduration=97.592728968 podStartE2EDuration="1m37.592728968s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:06.565674147 +0000 UTC m=+118.321799173" watchObservedRunningTime="2026-01-23 08:55:06.592728968 +0000 UTC m=+118.348853994" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.607001 5117 patch_prober.go:28] interesting pod/apiserver-9ddfb9f55-p4kn6 container/openshift-apiserver namespace/openshift-apiserver: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[+]ping ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]log ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]etcd ok Jan 
23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/start-apiserver-admission-initializer ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/generic-apiserver-start-informers ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/max-in-flight-filter ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/storage-object-count-tracker-hook ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/image.openshift.io-apiserver-caches ok Jan 23 08:55:06 crc kubenswrapper[5117]: [-]poststarthook/authorization.openshift.io-bootstrapclusterroles failed: reason withheld Jan 23 08:55:06 crc kubenswrapper[5117]: [-]poststarthook/authorization.openshift.io-ensurenodebootstrap-sa failed: reason withheld Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/project.openshift.io-projectcache ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/project.openshift.io-projectauthorizationcache ok Jan 23 08:55:06 crc kubenswrapper[5117]: [-]poststarthook/openshift.io-startinformers failed: reason withheld Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/openshift.io-restmapperupdater ok Jan 23 08:55:06 crc kubenswrapper[5117]: [+]poststarthook/quota.openshift.io-clusterquotamapping ok Jan 23 08:55:06 crc kubenswrapper[5117]: livez check failed Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.607062 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" podUID="bf50dcce-4975-442b-980a-da2e9b937d0e" containerName="openshift-apiserver" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.610727 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.611020 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.111002098 +0000 UTC m=+118.867127124 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.718589 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.718987 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.218975653 +0000 UTC m=+118.975100679 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.822226 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.822505 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.322474206 +0000 UTC m=+119.078599232 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.822959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.823404 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.323389231 +0000 UTC m=+119.079514257 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.925014 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.925222 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.425192307 +0000 UTC m=+119.181317343 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:06 crc kubenswrapper[5117]: I0123 08:55:06.925458 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:06 crc kubenswrapper[5117]: E0123 08:55:06.925871 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.425863265 +0000 UTC m=+119.181988291 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.026718 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.027295 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.527277021 +0000 UTC m=+119.283402047 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.115711 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-8vvn5"] Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.128262 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.128701 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.628684636 +0000 UTC m=+119.384809662 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.229462 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.229791 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.729776063 +0000 UTC m=+119.485901089 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.301520 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.306370 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:07 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:07 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:07 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.306440 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.330701 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.331010 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.830998613 +0000 UTC m=+119.587123639 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.438902 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.439066 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.93903258 +0000 UTC m=+119.695157606 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.439621 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.440183 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:07.940165911 +0000 UTC m=+119.696290937 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.472103 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-98l56" event={"ID":"f40f0985-7ebd-4874-a0c7-d9b896588b6b","Type":"ContainerStarted","Data":"d17379c68b716e2548dcc131d0da8f90b67b058cb21ef2f216ef774d4ffb791e"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.472169 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-canary/ingress-canary-98l56" event={"ID":"f40f0985-7ebd-4874-a0c7-d9b896588b6b","Type":"ContainerStarted","Data":"f0fd1013d6d42ed9011f6b648e4b79efbba8071a8c5c393ba64f073c9e8078dc"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.480868 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" event={"ID":"32490300-66c7-458a-a8ab-7a434ac83d82","Type":"ContainerStarted","Data":"c9d4243181ebbf18a8bd865b541f7ffea87fd8d65c11b0defcd64130f39bc9e0"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.480917 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" event={"ID":"32490300-66c7-458a-a8ab-7a434ac83d82","Type":"ContainerStarted","Data":"2db5cfce5e71963fb85e383b321e12c1176cc9b083d6fe30729c73e673014366"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.481851 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.483445 5117 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-cmk6w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": 
dial tcp 10.217.0.41:5443: connect: connection refused" start-of-body= Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.484350 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" podUID="32490300-66c7-458a-a8ab-7a434ac83d82" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": dial tcp 10.217.0.41:5443: connect: connection refused" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.487758 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" event={"ID":"6894ed64-8531-4aaa-81ac-aed462d5fdb7","Type":"ContainerStarted","Data":"4368d077d3f15cb9fa1cfe340abef3b028209f05b457232b8a1ddb72788c95d7"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.490602 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" event={"ID":"c6337181-3803-4b34-bd4d-ab52efa11902","Type":"ContainerStarted","Data":"0926ac137ac9f2526314851e673ee36b1c5ecf549ef8720747aa369d10a6a2e4"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.491243 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.494490 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-canary/ingress-canary-98l56" podStartSLOduration=8.494473667 podStartE2EDuration="8.494473667s" podCreationTimestamp="2026-01-23 08:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.49237807 +0000 UTC m=+119.248503096" watchObservedRunningTime="2026-01-23 08:55:07.494473667 +0000 UTC m=+119.250598693" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.515249 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" event={"ID":"28b236a1-f9c2-4e4d-9faa-8ed9919a1976","Type":"ContainerStarted","Data":"42414fefd52217923376ef9cb0e1c64719e08790b3daaae15621e2408460d003"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.515931 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.524844 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" podStartSLOduration=98.524815857 podStartE2EDuration="1m38.524815857s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.524004795 +0000 UTC m=+119.280129821" watchObservedRunningTime="2026-01-23 08:55:07.524815857 +0000 UTC m=+119.280940883" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.544967 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 
08:55:07.546068 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.046048998 +0000 UTC m=+119.802174024 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.549805 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/catalog-operator-75ff9f647d-cx7rb" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.551061 5117 patch_prober.go:28] interesting pod/olm-operator-5cdf44d969-vm6rr container/olm-operator namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" start-of-body= Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.551154 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" podUID="28b236a1-f9c2-4e4d-9faa-8ed9919a1976" containerName="olm-operator" probeResult="failure" output="Get \"https://10.217.0.22:8443/healthz\": dial tcp 10.217.0.22:8443: connect: connection refused" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.553781 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" podStartSLOduration=98.55377006 podStartE2EDuration="1m38.55377006s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.552576117 +0000 UTC m=+119.308701163" watchObservedRunningTime="2026-01-23 08:55:07.55377006 +0000 UTC m=+119.309895086" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.554609 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" event={"ID":"4356887a-acbc-4cf4-8fe3-4cea5a46a05a","Type":"ContainerStarted","Data":"0b6d0bf79ea49113a01775de7730dd40e8ea64bfb0380fe133858f3e04ba465c"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.554669 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" event={"ID":"4356887a-acbc-4cf4-8fe3-4cea5a46a05a","Type":"ContainerStarted","Data":"410d9d075a4b036a08ac7fc8d35b42b996523f1a7fb02175db809b77e695e1e9"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.556121 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" event={"ID":"d73c08bd-5e36-4e8e-9edf-0144d953a131","Type":"ContainerStarted","Data":"62b60b3442be0d17ba4285001c1310574ad19c898fe17a020a05277f6a71ae7d"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.571587 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" event={"ID":"a980473e-0fae-4e77-a49b-26eb41252303","Type":"ContainerStarted","Data":"b32f03c776e6f46d89e6b6df5e2159e20e41e5f30d827b9adfdfccf2d0c48ef1"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.574410 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-mtbpp" event={"ID":"3e2b1743-33bf-4099-b61b-444fe484becf","Type":"ContainerStarted","Data":"714f5a1d54b398936cc9597896ab72dafb84da124dbd7b2aea30dd2fea388306"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.585537 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" event={"ID":"348d4ab5-2b83-4443-9024-8d7d3db346d7","Type":"ContainerStarted","Data":"8a073c927ce0a31c22d571d3a7405726f3f9eefd67c28ca9ca1e3db7a5aa6932"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.640331 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" event={"ID":"48769271-1362-41ce-a21c-ecf6d869aece","Type":"ContainerStarted","Data":"a4a1b8c412c52b29a6ea5ea88cf936a8e816374ac4be884a05d182adccfc16b9"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.651070 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.652871 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" podStartSLOduration=98.652854372 podStartE2EDuration="1m38.652854372s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.611185031 +0000 UTC m=+119.367310067" watchObservedRunningTime="2026-01-23 08:55:07.652854372 +0000 UTC m=+119.408979418" Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.653336 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.153318694 +0000 UTC m=+119.909443720 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.680603 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns-operator/dns-operator-799b87ffcd-jjphf" podStartSLOduration=98.68057883 podStartE2EDuration="1m38.68057883s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.652364928 +0000 UTC m=+119.408489964" watchObservedRunningTime="2026-01-23 08:55:07.68057883 +0000 UTC m=+119.436703856" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.681439 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-api/control-plane-machine-set-operator-75ffdb6fcd-v4h4l" podStartSLOduration=98.681433204 podStartE2EDuration="1m38.681433204s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.680860448 +0000 UTC m=+119.436985484" watchObservedRunningTime="2026-01-23 08:55:07.681433204 +0000 UTC m=+119.437558230" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.713566 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" event={"ID":"c818e404-5db9-4c23-893d-1cd602c404aa","Type":"ContainerStarted","Data":"a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.713619 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" event={"ID":"c818e404-5db9-4c23-893d-1cd602c404aa","Type":"ContainerStarted","Data":"674de7850eb997754f71894a1bb1e386e01c9c7fa57a32236978c8ddcfe880ce"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.715066 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.716355 5117 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-t67z9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.716407 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.738003 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" podStartSLOduration=98.737988772 podStartE2EDuration="1m38.737988772s" 
podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.735912965 +0000 UTC m=+119.492037991" watchObservedRunningTime="2026-01-23 08:55:07.737988772 +0000 UTC m=+119.494113798" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.751788 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" event={"ID":"cd2d094e-247e-44e3-8732-bdf8582be0f4","Type":"ContainerStarted","Data":"587a92ba16b3607cfc37594cf1332842d9cb3334001c231e50ba85647842c505"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.754151 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.754469 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.254444772 +0000 UTC m=+120.010569798 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.756464 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.756874 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.256860048 +0000 UTC m=+120.012985074 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.762768 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" event={"ID":"04eba658-5e10-4d58-9238-75ce437d7bec","Type":"ContainerStarted","Data":"db64a9fa3799eb9c92c6549c50a782e342474f0ff3a40055bac1bc466e437cfe"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.762821 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" event={"ID":"04eba658-5e10-4d58-9238-75ce437d7bec","Type":"ContainerStarted","Data":"d2120a97cb7db55bb8e6e0d90e9f052092b20c434568c5524c593babb403b964"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.779309 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.793632 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca-operator/service-ca-operator-5b9c976747-p7sx2" podStartSLOduration=97.793609884 podStartE2EDuration="1m37.793609884s" podCreationTimestamp="2026-01-23 08:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.780994648 +0000 UTC m=+119.537119684" watchObservedRunningTime="2026-01-23 08:55:07.793609884 +0000 UTC m=+119.549734910" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.819501 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" event={"ID":"1609f854-4b62-4513-8fda-097c55c22a43","Type":"ContainerStarted","Data":"4d508725c59ccf3784ec669fdbfb6fd14d55537d350379693000f5e4399ea833"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.819567 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" event={"ID":"1609f854-4b62-4513-8fda-097c55c22a43","Type":"ContainerStarted","Data":"292e64e82cecb99ac686b1fc846642e9e33e13aa09fd37515ddd046c701ddfb7"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.832590 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" event={"ID":"7d6a6d13-d350-414a-8d19-266287758441","Type":"ContainerStarted","Data":"a692fbdb55926d328ca4c984a1d951ece2f6b913fdd3d73c048df8dad4aff6c2"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.845235 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" event={"ID":"f2500020-ee51-4792-a6d0-ca4c0f0fdec4","Type":"ContainerStarted","Data":"78b31edebaa08d7736d66bee31568d1b91cab2fee33aa11a1774cf0cc0153091"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.846780 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mlfmh" 
event={"ID":"5e7fef31-4908-4580-a0b5-96af5af3dc55","Type":"ContainerStarted","Data":"f46521deb7bc6cc216a09260bcdc1199d7e275fc285d5fb51315160e056ff1c1"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.848194 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" event={"ID":"b11d81e9-7489-4632-ab25-5a9fdc51a275","Type":"ContainerStarted","Data":"4187dce5e358caae88bab65b1a8970e7ce1825016ca44bc311aa3aca0ee1b825"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.848755 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.857964 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.858334 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.358310274 +0000 UTC m=+120.114435300 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.859660 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.859929 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" event={"ID":"08c5724f-8e70-4d84-acf6-3036bcbc7f4a","Type":"ContainerStarted","Data":"ca9e07e7c19231a47f13d241f54822cba6b850d245e57dcf519d55e277e1eb37"} Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.860375 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.360359131 +0000 UTC m=+120.116484157 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.892426 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-etcd-operator/etcd-operator-69b85846b6-nkrmv" podStartSLOduration=98.892403288 podStartE2EDuration="1m38.892403288s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.867680391 +0000 UTC m=+119.623805417" watchObservedRunningTime="2026-01-23 08:55:07.892403288 +0000 UTC m=+119.648528314" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.917861 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" podStartSLOduration=98.917842324 podStartE2EDuration="1m38.917842324s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.895934754 +0000 UTC m=+119.652059780" watchObservedRunningTime="2026-01-23 08:55:07.917842324 +0000 UTC m=+119.673967350" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.921986 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" event={"ID":"cf7dbe39-9107-49a2-ac48-8264fe632e4d","Type":"ContainerStarted","Data":"439d3be2db5cdcd79643baa9a2233834dca84a5c3e2ba4e3fcd1c358612e6acc"} Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.923320 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-lr6gt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.923383 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-lr6gt" podUID="5c20ff45-85a7-456d-90ae-85845c4aec43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.960480 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.961851 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.461831718 +0000 UTC m=+120.217956744 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.966422 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:07 crc kubenswrapper[5117]: E0123 08:55:07.966733 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.466717691 +0000 UTC m=+120.222842717 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:07 crc kubenswrapper[5117]: I0123 08:55:07.971076 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-config-operator/openshift-config-operator-5777786469-plxfl" Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.017561 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" podStartSLOduration=99.017536632 podStartE2EDuration="1m39.017536632s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:07.916709633 +0000 UTC m=+119.672834659" watchObservedRunningTime="2026-01-23 08:55:08.017536632 +0000 UTC m=+119.773661658" Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.081492 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.081866 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.581846382 +0000 UTC m=+120.337971408 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.183907 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.184371 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.684355098 +0000 UTC m=+120.440480124 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.288803 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.288958 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.78893728 +0000 UTC m=+120.545062306 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.289081 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.289475 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.789464474 +0000 UTC m=+120.545589500 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.304177 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:08 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:08 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:08 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.304247 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.395738 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.396068 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.896049461 +0000 UTC m=+120.652174487 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.498087 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.498514 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:08.998495395 +0000 UTC m=+120.754620421 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.600710 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.600867 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.100844266 +0000 UTC m=+120.856969292 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.601312 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.601637 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.101626948 +0000 UTC m=+120.857751974 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.649039 5117 ???:1] "http: TLS handshake error from 192.168.126.11:40328: no serving certificate available for the kubelet" Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.702347 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.702849 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.202829007 +0000 UTC m=+120.958954033 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.804005 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.804503 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.30448792 +0000 UTC m=+121.060612946 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.905842 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:08 crc kubenswrapper[5117]: E0123 08:55:08.906222 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.406202603 +0000 UTC m=+121.162327639 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:08 crc kubenswrapper[5117]: I0123 08:55:08.995618 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" event={"ID":"4356887a-acbc-4cf4-8fe3-4cea5a46a05a","Type":"ContainerStarted","Data":"f97fcde5b337ba8e3649c51d5b0481ab812a48f4cef0f2481d44afdd55409459"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.007217 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.007673 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.50765808 +0000 UTC m=+121.263783116 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.017121 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" event={"ID":"d73c08bd-5e36-4e8e-9edf-0144d953a131","Type":"ContainerStarted","Data":"bb50ae55749e9f9a3af4c96ae5f329e357462011e67fcba20704885a8520d14a"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.024953 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-service-ca/service-ca-74545575db-mtbpp" event={"ID":"3e2b1743-33bf-4099-b61b-444fe484becf","Type":"ContainerStarted","Data":"1299cb2014d444505686d45c6b79e2987a486b750e4b65b5d478c4268a4fee9b"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.026829 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" event={"ID":"348d4ab5-2b83-4443-9024-8d7d3db346d7","Type":"ContainerStarted","Data":"9dcbdfda69d1bcc89ebbf6d6ed28b70f0ab4024e58a970fed9465612fa25858a"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.039893 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cgm87"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.065151 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" 
event={"ID":"04eba658-5e10-4d58-9238-75ce437d7bec","Type":"ContainerStarted","Data":"0ac18de716d6870632912d4d4cfc91ba97ef3f8e64af30bd27de8e5a75fbda9f"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.065201 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cgm87"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.065332 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.067850 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.069507 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" event={"ID":"1609f854-4b62-4513-8fda-097c55c22a43","Type":"ContainerStarted","Data":"fe424f6a91e919532e3210cd5a8da9cce510f9c35cddb3e931af44ca7317a584"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.109854 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.111521 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.611503982 +0000 UTC m=+121.367629008 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.127720 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" event={"ID":"f2500020-ee51-4792-a6d0-ca4c0f0fdec4","Type":"ContainerStarted","Data":"0f21a485b03f8eda6772faca425ad22d93102675a3c10c63223bd2cd5eac91c5"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.150147 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mlfmh" event={"ID":"5e7fef31-4908-4580-a0b5-96af5af3dc55","Type":"ContainerStarted","Data":"309aa8ffb50057dc6aac5e076e0829c041c6375d036d83288beaf114b7dcf0e4"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.150189 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-dns/dns-default-mlfmh" event={"ID":"5e7fef31-4908-4580-a0b5-96af5af3dc55","Type":"ContainerStarted","Data":"3abefc21a3e26da91c1ed4655bac2dfefd340956d377cfa21d45e596ab26f51a"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.151040 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.153256 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" event={"ID":"b11d81e9-7489-4632-ab25-5a9fdc51a275","Type":"ContainerStarted","Data":"f4b59f76dfd5027c5878b3b9b3076ac6b2298e4a96ff933ea6e9183b34bcffb9"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.174049 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" event={"ID":"cf7dbe39-9107-49a2-ac48-8264fe632e4d","Type":"ContainerStarted","Data":"66a03673a30e6ebe6c752a9109f37f01747d42886797350a71d14d9f3a13f5e0"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.174096 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" event={"ID":"cf7dbe39-9107-49a2-ac48-8264fe632e4d","Type":"ContainerStarted","Data":"57c57a29e89adf4ace58dee242c3c8676ff6a73db3e1019f3aae3dec47a6a1ea"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.188120 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" event={"ID":"6894ed64-8531-4aaa-81ac-aed462d5fdb7","Type":"ContainerStarted","Data":"988f5d7efb1f1dfecdf27f763ac4ce3564c639e17977870a2a3baad31d390373"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.188192 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" event={"ID":"6894ed64-8531-4aaa-81ac-aed462d5fdb7","Type":"ContainerStarted","Data":"557b7e414b4016728c744008c84b074c2857e4cbc633cc2c6b4cea0dedbe36f8"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.191498 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.194709 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"3a14caf222afb62aaabdc47808b6f944","Type":"ContainerStarted","Data":"2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b"} Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.195439 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.196813 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-lr6gt container/download-server namespace/openshift-console: Readiness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.196862 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-console/downloads-747b44746d-lr6gt" podUID="5c20ff45-85a7-456d-90ae-85845c4aec43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.201389 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" containerID="cri-o://caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" gracePeriod=30 Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.204661 5117 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-t67z9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.204746 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.213056 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-catalog-content\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.213325 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.213412 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-2jrvk\" (UniqueName: 
\"kubernetes.io/projected/aace2d11-7f7c-464c-b258-c61edb938e83-kube-api-access-2jrvk\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.213451 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-utilities\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.214955 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.714937433 +0000 UTC m=+121.471062459 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.215680 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/olm-operator-5cdf44d969-vm6rr" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.231581 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-7qf4s"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.261758 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.281379 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7qf4s"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.283493 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.303482 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:09 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:09 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:09 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.303572 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.316289 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.317011 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-catalog-content\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.317149 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sttx4\" (UniqueName: \"kubernetes.io/projected/32b82517-9793-44f5-bc31-05ba0d27c553-kube-api-access-sttx4\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.317208 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-2jrvk\" (UniqueName: \"kubernetes.io/projected/aace2d11-7f7c-464c-b258-c61edb938e83-kube-api-access-2jrvk\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.317290 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-utilities\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.317485 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-catalog-content\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.317530 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-utilities\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.318555 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.818527058 +0000 UTC m=+121.574652084 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.321248 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-catalog-content\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.322087 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-utilities\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.370107 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-2jrvk\" (UniqueName: \"kubernetes.io/projected/aace2d11-7f7c-464c-b258-c61edb938e83-kube-api-access-2jrvk\") pod \"certified-operators-cgm87\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.385872 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=25.385855371 podStartE2EDuration="25.385855371s" podCreationTimestamp="2026-01-23 08:54:44 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.383672261 +0000 UTC m=+121.139797297" watchObservedRunningTime="2026-01-23 08:55:09.385855371 +0000 UTC m=+121.141980397" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.412100 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-cz52g"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.424639 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.424711 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-catalog-content\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.424746 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sttx4\" (UniqueName: \"kubernetes.io/projected/32b82517-9793-44f5-bc31-05ba0d27c553-kube-api-access-sttx4\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.424829 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-utilities\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.425515 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-utilities\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.425938 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:09.925920987 +0000 UTC m=+121.682046013 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.426490 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.426587 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-catalog-content\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.434465 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-dns/dns-default-mlfmh" podStartSLOduration=10.43443563 podStartE2EDuration="10.43443563s" podCreationTimestamp="2026-01-23 08:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.424682603 +0000 UTC m=+121.180807629" watchObservedRunningTime="2026-01-23 08:55:09.43443563 +0000 UTC m=+121.190560666" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.449001 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.457060 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cz52g"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.490436 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sttx4\" (UniqueName: \"kubernetes.io/projected/32b82517-9793-44f5-bc31-05ba0d27c553-kube-api-access-sttx4\") pod \"community-operators-7qf4s\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.500543 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-controller-f9cdd68f7-snhxl" podStartSLOduration=100.500490658 podStartE2EDuration="1m40.500490658s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.494584246 +0000 UTC m=+121.250709272" watchObservedRunningTime="2026-01-23 08:55:09.500490658 +0000 UTC m=+121.256615684" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.538549 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.538902 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-utilities\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.539035 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-catalog-content\") pod \"certified-operators-cz52g\" (UID: 
\"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.539216 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r2v64\" (UniqueName: \"kubernetes.io/projected/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-kube-api-access-r2v64\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.539429 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.039407953 +0000 UTC m=+121.795532979 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.595797 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.604954 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler-operator/openshift-kube-scheduler-operator-54f497555d-nt8cs" podStartSLOduration=100.604935746 podStartE2EDuration="1m40.604935746s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.53273382 +0000 UTC m=+121.288858856" watchObservedRunningTime="2026-01-23 08:55:09.604935746 +0000 UTC m=+121.361060772" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.614207 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-k5dxg"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.626652 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-machine-config-operator/machine-config-operator-67c9d58cbb-h4kwm" podStartSLOduration=100.62663069 podStartE2EDuration="1m40.62663069s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.603701293 +0000 UTC m=+121.359826329" watchObservedRunningTime="2026-01-23 08:55:09.62663069 +0000 UTC m=+121.382755716" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.632416 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.636184 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5dxg"] Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.641272 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r2v64\" (UniqueName: \"kubernetes.io/projected/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-kube-api-access-r2v64\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.641365 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-utilities\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.641469 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.641496 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-catalog-content\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.641849 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-utilities\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.642063 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.142046652 +0000 UTC m=+121.898171768 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.647518 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-catalog-content\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.656734 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ingress-operator/ingress-operator-6b9cb4dbcf-7jjtt" podStartSLOduration=100.656716483 podStartE2EDuration="1m40.656716483s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.6551201 +0000 UTC m=+121.411245136" watchObservedRunningTime="2026-01-23 08:55:09.656716483 +0000 UTC m=+121.412841509" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.699404 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r2v64\" (UniqueName: \"kubernetes.io/projected/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-kube-api-access-r2v64\") pod \"certified-operators-cz52g\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.747781 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.747971 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-utilities\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.748176 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.248152946 +0000 UTC m=+122.004277982 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.748362 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-catalog-content\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.748471 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cz89r\" (UniqueName: \"kubernetes.io/projected/853c0f27-c63d-47ba-844c-e0ae7a71b079-kube-api-access-cz89r\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.753457 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-storage-version-migrator/migrator-866fcbc849-h2827" podStartSLOduration=100.75343397 podStartE2EDuration="1m40.75343397s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.696589065 +0000 UTC m=+121.452714101" watchObservedRunningTime="2026-01-23 08:55:09.75343397 +0000 UTC m=+121.509559016" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.754670 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver-operator/kube-apiserver-operator-575994946d-z9gw4" podStartSLOduration=100.754650714 podStartE2EDuration="1m40.754650714s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.754478429 +0000 UTC m=+121.510603475" watchObservedRunningTime="2026-01-23 08:55:09.754650714 +0000 UTC m=+121.510775740" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.816531 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-multus/multus-admission-controller-69db94689b-z57tz" podStartSLOduration=100.816510377 podStartE2EDuration="1m40.816510377s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.815592082 +0000 UTC m=+121.571717108" watchObservedRunningTime="2026-01-23 08:55:09.816510377 +0000 UTC m=+121.572635413" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.837715 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.861567 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-utilities\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.861723 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.861783 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-catalog-content\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.861810 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cz89r\" (UniqueName: \"kubernetes.io/projected/853c0f27-c63d-47ba-844c-e0ae7a71b079-kube-api-access-cz89r\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.862553 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-utilities\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.862837 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.362822654 +0000 UTC m=+122.118947680 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.863252 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-catalog-content\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.906008 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cz89r\" (UniqueName: \"kubernetes.io/projected/853c0f27-c63d-47ba-844c-e0ae7a71b079-kube-api-access-cz89r\") pod \"community-operators-k5dxg\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.939761 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-service-ca/service-ca-74545575db-mtbpp" podStartSLOduration=99.939743939 podStartE2EDuration="1m39.939743939s" podCreationTimestamp="2026-01-23 08:53:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:09.939537524 +0000 UTC m=+121.695662560" watchObservedRunningTime="2026-01-23 08:55:09.939743939 +0000 UTC m=+121.695868965" Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.969713 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:09 crc kubenswrapper[5117]: E0123 08:55:09.970214 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.470193783 +0000 UTC m=+122.226318809 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:09 crc kubenswrapper[5117]: I0123 08:55:09.994723 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.073953 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.074447 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.574431405 +0000 UTC m=+122.330556431 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.175487 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.175920 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.675891991 +0000 UTC m=+122.432017017 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.176282 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.176878 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.676868018 +0000 UTC m=+122.432993044 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.198051 5117 patch_prober.go:28] interesting pod/packageserver-7d4fc7d867-cmk6w container/packageserver namespace/openshift-operator-lifecycle-manager: Readiness probe status=failure output="Get \"https://10.217.0.41:5443/healthz\": context deadline exceeded" start-of-body= Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.198120 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" podUID="32490300-66c7-458a-a8ab-7a434ac83d82" containerName="packageserver" probeResult="failure" output="Get \"https://10.217.0.41:5443/healthz\": context deadline exceeded" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.211977 5117 patch_prober.go:28] interesting pod/marketplace-operator-547dbd544d-t67z9 container/marketplace-operator namespace/openshift-marketplace: Readiness probe status=failure output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" start-of-body= Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.212024 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" probeResult="failure" output="Get \"http://10.217.0.31:8080/healthz\": dial tcp 10.217.0.31:8080: connect: connection refused" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.259355 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/packageserver-7d4fc7d867-cmk6w" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.279067 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.279720 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.779647991 +0000 UTC m=+122.535773017 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.305328 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:10 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:10 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:10 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.305403 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.322252 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-7qf4s"] Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.374480 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cgm87"] Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.382167 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.384255 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.884238013 +0000 UTC m=+122.640363049 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.488275 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.488647 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:10.988631921 +0000 UTC m=+122.744756947 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.578544 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-cz52g"] Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.590190 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.590590 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.090576031 +0000 UTC m=+122.846701057 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: W0123 08:55:10.605045 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poddea7fdf2_1eec_4ae5_972b_2dc333b46e24.slice/crio-18a45e4a8d81452c06de309d3ca67a6cae5a11074c5515241e64a0334e7e1935 WatchSource:0}: Error finding container 18a45e4a8d81452c06de309d3ca67a6cae5a11074c5515241e64a0334e7e1935: Status 404 returned error can't find the container with id 18a45e4a8d81452c06de309d3ca67a6cae5a11074c5515241e64a0334e7e1935 Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.692899 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.693395 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.193373994 +0000 UTC m=+122.949499020 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.797114 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.797472 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.297456982 +0000 UTC m=+123.053582008 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.834200 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-k5dxg"] Jan 23 08:55:10 crc kubenswrapper[5117]: W0123 08:55:10.884886 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod853c0f27_c63d_47ba_844c_e0ae7a71b079.slice/crio-029205d2f175976cd0271093e3922e6037e61a4f7b2cb6a46e42d944ab050bf1 WatchSource:0}: Error finding container 029205d2f175976cd0271093e3922e6037e61a4f7b2cb6a46e42d944ab050bf1: Status 404 returned error can't find the container with id 029205d2f175976cd0271093e3922e6037e61a4f7b2cb6a46e42d944ab050bf1 Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.891229 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.897854 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.898150 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.398117167 +0000 UTC m=+123.154242193 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.972348 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.972403 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.981659 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:10 crc kubenswrapper[5117]: I0123 08:55:10.999216 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:10 crc kubenswrapper[5117]: E0123 08:55:10.999584 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.499572274 +0000 UTC m=+123.255697300 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.008280 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-v2wfd"] Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.100945 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.101196 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.601167714 +0000 UTC m=+123.357292750 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.101551 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.102018 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.601985377 +0000 UTC m=+123.358110403 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.203109 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.203343 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.70332175 +0000 UTC m=+123.459446776 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.203686 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.204022 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.704012499 +0000 UTC m=+123.460137525 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.216989 5117 generic.go:358] "Generic (PLEG): container finished" podID="32b82517-9793-44f5-bc31-05ba0d27c553" containerID="912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac" exitCode=0 Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.220044 5117 generic.go:358] "Generic (PLEG): container finished" podID="aace2d11-7f7c-464c-b258-c61edb938e83" containerID="5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb" exitCode=0 Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227118 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2wfd"] Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227176 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qf4s" event={"ID":"32b82517-9793-44f5-bc31-05ba0d27c553","Type":"ContainerDied","Data":"912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac"} Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227214 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qf4s" event={"ID":"32b82517-9793-44f5-bc31-05ba0d27c553","Type":"ContainerStarted","Data":"a7573f271336096a2d1b9c11c0632ed3fffed5ed7a23f5a95db9d32c29407bc4"} Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227232 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5dxg" event={"ID":"853c0f27-c63d-47ba-844c-e0ae7a71b079","Type":"ContainerStarted","Data":"029205d2f175976cd0271093e3922e6037e61a4f7b2cb6a46e42d944ab050bf1"} Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227247 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgm87" 
event={"ID":"aace2d11-7f7c-464c-b258-c61edb938e83","Type":"ContainerDied","Data":"5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb"} Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227261 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgm87" event={"ID":"aace2d11-7f7c-464c-b258-c61edb938e83","Type":"ContainerStarted","Data":"bf821ca60582250467c9b8bfd4667a7d3ae2e366a44c13543764cfd92d036c9a"} Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227272 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz52g" event={"ID":"dea7fdf2-1eec-4ae5-972b-2dc333b46e24","Type":"ContainerStarted","Data":"18a45e4a8d81452c06de309d3ca67a6cae5a11074c5515241e64a0334e7e1935"} Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.227846 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.233109 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.284120 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-oauth-apiserver/apiserver-8596bd845d-24k6c" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.287405 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-apiserver/apiserver-9ddfb9f55-p4kn6" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.304894 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.305252 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.805220089 +0000 UTC m=+123.561345125 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.312376 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:11 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:11 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:11 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.312462 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.406392 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-utilities\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.406616 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-catalog-content\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.406901 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.406944 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8dxjj\" (UniqueName: \"kubernetes.io/projected/861a8fa4-2b12-475a-819b-74238f4d1a60-kube-api-access-8dxjj\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.408374 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:11.908357762 +0000 UTC m=+123.664482878 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.428205 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-qlr9s"] Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.507812 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.508415 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8dxjj\" (UniqueName: \"kubernetes.io/projected/861a8fa4-2b12-475a-819b-74238f4d1a60-kube-api-access-8dxjj\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.508465 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-utilities\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.508539 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-catalog-content\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.509075 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-catalog-content\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.509183 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.009163601 +0000 UTC m=+123.765288627 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.513371 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-utilities\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.555599 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlr9s"] Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.555763 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.570999 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8dxjj\" (UniqueName: \"kubernetes.io/projected/861a8fa4-2b12-475a-819b-74238f4d1a60-kube-api-access-8dxjj\") pod \"redhat-marketplace-v2wfd\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.613458 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.614086 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.114057301 +0000 UTC m=+123.870182327 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.714785 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.715019 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. 
No retries permitted until 2026-01-23 08:55:12.214963363 +0000 UTC m=+123.971088399 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.715329 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-catalog-content\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.715631 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.715768 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-utilities\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.715827 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hhdwr\" (UniqueName: \"kubernetes.io/projected/86067e48-6b0c-408d-be6c-cd65aff8ef97-kube-api-access-hhdwr\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.716263 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.216252468 +0000 UTC m=+123.972377494 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.790549 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.799590 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.806506 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler\"/\"installer-sa-dockercfg-qpkss\"" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.806613 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler\"/\"kube-root-ca.crt\"" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.816653 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.816833 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.31680046 +0000 UTC m=+124.072925506 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.817212 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-catalog-content\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.817382 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.817471 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-utilities\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.817628 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hhdwr\" (UniqueName: \"kubernetes.io/projected/86067e48-6b0c-408d-be6c-cd65aff8ef97-kube-api-access-hhdwr\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.817783 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-catalog-content\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.817914 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-utilities\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.818047 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.318033534 +0000 UTC m=+124.074158650 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.827363 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.854386 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.904317 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hhdwr\" (UniqueName: \"kubernetes.io/projected/86067e48-6b0c-408d-be6c-cd65aff8ef97-kube-api-access-hhdwr\") pod \"redhat-marketplace-qlr9s\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.919063 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.919231 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.419198003 +0000 UTC m=+124.175323029 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.919744 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.919813 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:11 crc kubenswrapper[5117]: I0123 08:55:11.919944 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:11 crc kubenswrapper[5117]: E0123 08:55:11.920075 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.420059666 +0000 UTC m=+124.176184762 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.021997 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.022322 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.522288544 +0000 UTC m=+124.278413570 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.022624 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.022677 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.022788 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.022959 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kubelet-dir\") pod \"revision-pruner-6-crc\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.023045 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.523034714 +0000 UTC m=+124.279159740 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.055953 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kube-api-access\") pod \"revision-pruner-6-crc\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.097391 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2wfd"] Jan 23 08:55:12 crc kubenswrapper[5117]: W0123 08:55:12.109270 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod861a8fa4_2b12_475a_819b_74238f4d1a60.slice/crio-2a592f32f8f19c63156dd303d4a7553e406ebe27da1b244dea93b167d9812e67 WatchSource:0}: Error finding container 2a592f32f8f19c63156dd303d4a7553e406ebe27da1b244dea93b167d9812e67: Status 404 returned error can't find the container with id 2a592f32f8f19c63156dd303d4a7553e406ebe27da1b244dea93b167d9812e67 Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.114461 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.126069 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.126334 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.626311001 +0000 UTC m=+124.382436027 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.127529 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.128302 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.628289515 +0000 UTC m=+124.384414541 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.204496 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.230038 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.230654 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.730625696 +0000 UTC m=+124.486750722 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.231049 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.231523 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.73151606 +0000 UTC m=+124.487641086 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.248577 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" event={"ID":"08c5724f-8e70-4d84-acf6-3036bcbc7f4a","Type":"ContainerStarted","Data":"6710c3e670ffa8d8cf788c419187464cbb89f59703e07829f7f451d0e64eafa6"} Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.252877 5117 generic.go:358] "Generic (PLEG): container finished" podID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerID="bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52" exitCode=0 Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.253051 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5dxg" event={"ID":"853c0f27-c63d-47ba-844c-e0ae7a71b079","Type":"ContainerDied","Data":"bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52"} Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.260355 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2wfd" event={"ID":"861a8fa4-2b12-475a-819b-74238f4d1a60","Type":"ContainerStarted","Data":"2a592f32f8f19c63156dd303d4a7553e406ebe27da1b244dea93b167d9812e67"} Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.271639 5117 generic.go:358] "Generic (PLEG): container finished" podID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerID="9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1" exitCode=0 Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.272114 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz52g" event={"ID":"dea7fdf2-1eec-4ae5-972b-2dc333b46e24","Type":"ContainerDied","Data":"9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1"} Jan 23 08:55:12 crc 
kubenswrapper[5117]: I0123 08:55:12.306830 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:12 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:12 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:12 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.306900 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.332745 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.339651 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.839617489 +0000 UTC m=+124.595742595 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.404966 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hmlw5"] Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.415817 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.418170 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmlw5"] Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.418563 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.432978 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-scheduler/revision-pruner-6-crc"] Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.435016 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.435529 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:12.935511583 +0000 UTC m=+124.691636689 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: W0123 08:55:12.461944 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod7f1ebeed_bd48_41b0_9d0f_ef2462236f5d.slice/crio-b7376f7ce9ebdba97a6af6415faa29fe66682819937bb6c4576f54408ac5620e WatchSource:0}: Error finding container b7376f7ce9ebdba97a6af6415faa29fe66682819937bb6c4576f54408ac5620e: Status 404 returned error can't find the container with id b7376f7ce9ebdba97a6af6415faa29fe66682819937bb6c4576f54408ac5620e Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.536760 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.536870 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.036853497 +0000 UTC m=+124.792978523 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.536993 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rngk6\" (UniqueName: \"kubernetes.io/projected/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-kube-api-access-rngk6\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.537030 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-catalog-content\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.537500 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-utilities\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.538002 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.538379 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.038362338 +0000 UTC m=+124.794487364 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.639943 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.640289 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rngk6\" (UniqueName: \"kubernetes.io/projected/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-kube-api-access-rngk6\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.640418 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.140366149 +0000 UTC m=+124.896491175 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.640483 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-catalog-content\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.640662 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-utilities\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.640700 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.641111 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: 
\"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-catalog-content\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.641212 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.141193832 +0000 UTC m=+124.897318858 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.644042 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-utilities\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.660273 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rngk6\" (UniqueName: \"kubernetes.io/projected/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-kube-api-access-rngk6\") pod \"redhat-operators-hmlw5\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.739172 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlr9s"] Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.742936 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.743229 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.743546 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.243517923 +0000 UTC m=+124.999642979 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.743908 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.744286 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.244272893 +0000 UTC m=+125.000397919 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: W0123 08:55:12.756905 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod86067e48_6b0c_408d_be6c_cd65aff8ef97.slice/crio-e93354022bb22bafffe1a831f0fe5ca018c1b1cf0557dea6212f23c700a325ff WatchSource:0}: Error finding container e93354022bb22bafffe1a831f0fe5ca018c1b1cf0557dea6212f23c700a325ff: Status 404 returned error can't find the container with id e93354022bb22bafffe1a831f0fe5ca018c1b1cf0557dea6212f23c700a325ff Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.802654 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-9wt6p"] Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.846873 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.847586 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.34756239 +0000 UTC m=+125.103687416 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.876533 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wt6p"] Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.876735 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.948534 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-utilities\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.948643 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.948674 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-catalog-content\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.948722 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l5l7t\" (UniqueName: \"kubernetes.io/projected/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-kube-api-access-l5l7t\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:12 crc kubenswrapper[5117]: E0123 08:55:12.949066 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.449049678 +0000 UTC m=+125.205174694 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.986514 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:12 crc kubenswrapper[5117]: I0123 08:55:12.986574 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.006466 5117 patch_prober.go:28] interesting pod/console-64d44f6ddf-mkrsr container/console namespace/openshift-console: Startup probe status=failure output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" start-of-body= Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.006538 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-console/console-64d44f6ddf-mkrsr" podUID="bdb7201e-da3c-4c31-9cae-a139067e3a83" containerName="console" probeResult="failure" output="Get \"https://10.217.0.19:8443/health\": dial tcp 10.217.0.19:8443: connect: connection refused" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.010925 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hmlw5"] Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.038773 5117 patch_prober.go:28] interesting pod/downloads-747b44746d-lr6gt container/download-server namespace/openshift-console: Liveness probe status=failure output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" start-of-body= Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.038836 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-console/downloads-747b44746d-lr6gt" podUID="5c20ff45-85a7-456d-90ae-85845c4aec43" containerName="download-server" probeResult="failure" output="Get \"http://10.217.0.18:8080/\": dial tcp 10.217.0.18:8080: connect: connection refused" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.050779 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.050937 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.550913345 +0000 UTC m=+125.307038371 (durationBeforeRetry 500ms). 
Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.051059 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.051097 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-catalog-content\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.051199 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l5l7t\" (UniqueName: \"kubernetes.io/projected/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-kube-api-access-l5l7t\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.051385 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-utilities\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.052071 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-utilities\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.052204 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.55218695 +0000 UTC m=+125.308312046 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.052649 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-catalog-content\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: W0123 08:55:13.055330 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod5d31bbe8_9012_4c1b_8f77_3c795f1eef9a.slice/crio-3120479a1d7e691dc210b3ea1fd9624c47a6d18b472792288001c37d706e23c1 WatchSource:0}: Error finding container 3120479a1d7e691dc210b3ea1fd9624c47a6d18b472792288001c37d706e23c1: Status 404 returned error can't find the container with id 3120479a1d7e691dc210b3ea1fd9624c47a6d18b472792288001c37d706e23c1 Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.073852 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l5l7t\" (UniqueName: \"kubernetes.io/projected/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-kube-api-access-l5l7t\") pod \"redhat-operators-9wt6p\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.152520 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.152855 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.652838435 +0000 UTC m=+125.408963461 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.223939 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.254373 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.254774 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.754737354 +0000 UTC m=+125.510862380 (durationBeforeRetry 500ms). Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.281697 5117 generic.go:358] "Generic (PLEG): container finished" podID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerID="399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3" exitCode=0 Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.281780 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2wfd" event={"ID":"861a8fa4-2b12-475a-819b-74238f4d1a60","Type":"ContainerDied","Data":"399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3"} Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.287388 5117 generic.go:358] "Generic (PLEG): container finished" podID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerID="296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26" exitCode=0 Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.288251 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlr9s" event={"ID":"86067e48-6b0c-408d-be6c-cd65aff8ef97","Type":"ContainerDied","Data":"296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26"} Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.288317 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlr9s" event={"ID":"86067e48-6b0c-408d-be6c-cd65aff8ef97","Type":"ContainerStarted","Data":"e93354022bb22bafffe1a831f0fe5ca018c1b1cf0557dea6212f23c700a325ff"} Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.292635 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d","Type":"ContainerStarted","Data":"860de0f1c89582d16d0e41a90818bde3eb86a2f9f4dbb4358e7ada3309c0b8a1"} Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.292677 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d","Type":"ContainerStarted","Data":"b7376f7ce9ebdba97a6af6415faa29fe66682819937bb6c4576f54408ac5620e"} Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.299590 5117 kubelet.go:2569] 
"SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmlw5" event={"ID":"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a","Type":"ContainerStarted","Data":"3120479a1d7e691dc210b3ea1fd9624c47a6d18b472792288001c37d706e23c1"} Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.307180 5117 patch_prober.go:28] interesting pod/router-default-68cf44c8b8-mlq9j container/router namespace/openshift-ingress: Startup probe status=failure output="HTTP probe failed with statuscode: 500" start-of-body=[-]backend-http failed: reason withheld Jan 23 08:55:13 crc kubenswrapper[5117]: [-]has-synced failed: reason withheld Jan 23 08:55:13 crc kubenswrapper[5117]: [+]process-running ok Jan 23 08:55:13 crc kubenswrapper[5117]: healthz check failed Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.307231 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" podUID="bc9ea72b-c9a6-4fbc-873a-133709e45add" containerName="router" probeResult="failure" output="HTTP probe failed with statuscode: 500" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.357890 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.358595 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.858573666 +0000 UTC m=+125.614698692 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.358705 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.359639 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.859631444 +0000 UTC m=+125.615756470 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.359948 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-scheduler/revision-pruner-6-crc" podStartSLOduration=2.354119314 podStartE2EDuration="2.354119314s" podCreationTimestamp="2026-01-23 08:55:11 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:13.348813138 +0000 UTC m=+125.104938164" watchObservedRunningTime="2026-01-23 08:55:13.354119314 +0000 UTC m=+125.110244340" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.459584 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.459812 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.959778345 +0000 UTC m=+125.715903371 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.460077 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.460417 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:13.960404823 +0000 UTC m=+125.716529849 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.528286 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-9wt6p"] Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.563980 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.564111 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.06408484 +0000 UTC m=+125.820209866 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.564208 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.564735 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.064725168 +0000 UTC m=+125.820850194 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.665349 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.665872 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.165852204 +0000 UTC m=+125.921977230 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.766986 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.767384 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.267365882 +0000 UTC m=+126.023490908 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.807262 5117 ???:1] "http: TLS handshake error from 192.168.126.11:40344: no serving certificate available for the kubelet" Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.868095 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.868373 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.368342446 +0000 UTC m=+126.124467482 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.868525 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.869183 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.369164748 +0000 UTC m=+126.125289774 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:13 crc kubenswrapper[5117]: I0123 08:55:13.970521 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:13 crc kubenswrapper[5117]: E0123 08:55:13.970864 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.470843491 +0000 UTC m=+126.226968517 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.072694 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.073236 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.573208353 +0000 UTC m=+126.329333379 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.173588 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.174247 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.674229597 +0000 UTC m=+126.430354623 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.275382 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.275751 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.775734525 +0000 UTC m=+126.531859551 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.301190 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.306201 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.306739 5117 generic.go:358] "Generic (PLEG): container finished" podID="7f1ebeed-bd48-41b0-9d0f-ef2462236f5d" containerID="860de0f1c89582d16d0e41a90818bde3eb86a2f9f4dbb4358e7ada3309c0b8a1" exitCode=0 Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.306854 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d","Type":"ContainerDied","Data":"860de0f1c89582d16d0e41a90818bde3eb86a2f9f4dbb4358e7ada3309c0b8a1"} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.311961 5117 generic.go:358] "Generic (PLEG): container finished" podID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerID="ba2e65e69c12425d479fef27af04ca01d5dc72c3c607309a583589a5fc986f3c" exitCode=0 Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.312064 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wt6p" event={"ID":"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a","Type":"ContainerDied","Data":"ba2e65e69c12425d479fef27af04ca01d5dc72c3c607309a583589a5fc986f3c"} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.312087 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wt6p" event={"ID":"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a","Type":"ContainerStarted","Data":"5c15c809bb925793321e61a2a1e7c1cedefb0d225157f7fdbe5ff8eb865c5fe3"} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.313911 5117 generic.go:358] "Generic (PLEG): container finished" podID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerID="0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe" exitCode=0 Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.313964 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmlw5" event={"ID":"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a","Type":"ContainerDied","Data":"0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe"} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.317601 5117 generic.go:358] "Generic (PLEG): container finished" podID="f2500020-ee51-4792-a6d0-ca4c0f0fdec4" containerID="0f21a485b03f8eda6772faca425ad22d93102675a3c10c63223bd2cd5eac91c5" exitCode=0 Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.317685 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" event={"ID":"f2500020-ee51-4792-a6d0-ca4c0f0fdec4","Type":"ContainerDied","Data":"0f21a485b03f8eda6772faca425ad22d93102675a3c10c63223bd2cd5eac91c5"} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.323006 
5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" event={"ID":"08c5724f-8e70-4d84-acf6-3036bcbc7f4a","Type":"ContainerStarted","Data":"6ee213406c83633f01f6dd1eaea760392e7a3925f7c01c13573a4f08bee1657a"} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.376905 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.377458 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.877407348 +0000 UTC m=+126.633532374 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.377927 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.378618 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.878604541 +0000 UTC m=+126.634729747 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.396451 5117 plugin_watcher.go:194] "Adding socket path or updating timestamp to desired state cache" path="/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock" Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.479256 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.479468 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.97943918 +0000 UTC m=+126.735564216 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.479626 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.479939 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:14.979925244 +0000 UTC m=+126.736050270 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.581143 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.581556 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.081499574 +0000 UTC m=+126.837624600 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.582313 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.582668 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.082645385 +0000 UTC m=+126.838770401 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.685643 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.685852 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.185817659 +0000 UTC m=+126.941942685 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.685980 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.686631 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.18659244 +0000 UTC m=+126.942717656 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.791923 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.792948 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.292897829 +0000 UTC m=+127.049022855 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.794444 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.796052 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.296030835 +0000 UTC m=+127.052155861 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.896083 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.896287 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName:9e9b5059-1b3e-4067-a63d-2952cbe863af nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.396252758 +0000 UTC m=+127.152377794 (durationBeforeRetry 500ms). Error: UnmountVolume.TearDown failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af") : kubernetes.io/csi: Unmounter.TearDownAt failed to get CSI client: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.896698 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:14 crc kubenswrapper[5117]: E0123 08:55:14.897121 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 podName: nodeName:}" failed. No retries permitted until 2026-01-23 08:55:15.397104401 +0000 UTC m=+127.153229507 (durationBeforeRetry 500ms). 
Error: MountVolume.MountDevice failed for volume "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (UniqueName: "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "image-registry-66587d64c8-nsh7s" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57") : kubernetes.io/csi: attacher.MountDevice failed to create newCsiDriverClient: driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.931125 5117 reconciler.go:161] "OperationExecutor.RegisterPlugin started" plugin={"SocketPath":"/var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock","Timestamp":"2026-01-23T08:55:14.396509001Z","UUID":"16cc11b1-fe0a-410a-8b40-76cb6460fb4d","Handler":null,"Name":"","Endpoint":""} Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.939082 5117 csi_plugin.go:106] kubernetes.io/csi: Trying to validate a new CSI Driver with name: kubevirt.io.hostpath-provisioner endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock versions: 1.0.0 Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.939151 5117 csi_plugin.go:119] kubernetes.io/csi: Register new plugin with name: kubevirt.io.hostpath-provisioner at endpoint: /var/lib/kubelet/plugins/csi-hostpath/csi.sock Jan 23 08:55:14 crc kubenswrapper[5117]: I0123 08:55:14.998392 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"9e9b5059-1b3e-4067-a63d-2952cbe863af\" (UID: \"9e9b5059-1b3e-4067-a63d-2952cbe863af\") " Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.003827 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2") pod "9e9b5059-1b3e-4067-a63d-2952cbe863af" (UID: "9e9b5059-1b3e-4067-a63d-2952cbe863af"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.100386 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.103176 5117 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
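The entries above capture both the failure and the recovery: every MountVolume.MountDevice and UnmountVolume.TearDown attempt for pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2 was rejected with "driver name kubevirt.io.hostpath-provisioner not found in the list of registered CSI drivers" and re-queued with a 500ms durationBeforeRetry, until the plugin watcher picked up /var/lib/kubelet/plugins_registry/kubevirt.io.hostpath-provisioner-reg.sock at 08:55:14.396 and the kubelet validated and registered the driver at 08:55:14.939; the entries that follow show the mount for image-registry-66587d64c8-nsh7s finally going through. As a rough way to quantify such a window from a log like this one, here is a minimal, illustrative Go sketch (not part of kubelet or any existing tool; the file name, flag defaults, and assumed year are my own choices) that matches the message strings seen above and reports how long the driver was missing.

```go
// csidelay.go — minimal, illustrative sketch (not part of kubelet or any
// existing tool): scan a plain-text kubelet log for a CSI driver that volume
// operations report as "not found in the list of registered CSI drivers", and
// report how long it took until the kubelet registered it. It assumes one
// journal-style entry per line ("Jan 23 08:55:14 crc kubenswrapper[...]: ...");
// the file name, flag defaults, and assumed year are assumptions, while the
// matched message strings are taken verbatim from the log above.
package main

import (
	"bufio"
	"flag"
	"fmt"
	"os"
	"regexp"
	"strings"
	"time"
)

func main() {
	logPath := flag.String("log", "kubelet.log", "path to a plain-text kubelet log")
	driver := flag.String("driver", "kubevirt.io.hostpath-provisioner", "CSI driver name to look for")
	year := flag.Int("year", 2026, "year to assume (the syslog-style prefix omits it)")
	flag.Parse()

	f, err := os.Open(*logPath)
	if err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}
	defer f.Close()

	// Journal prefix, e.g. "Jan 23 08:55:14 crc kubenswrapper[5117]: ...".
	tsRe := regexp.MustCompile(`^([A-Z][a-z]{2} +\d{1,2} \d{2}:\d{2}:\d{2}) `)
	notFound := fmt.Sprintf("driver name %s not found in the list of registered CSI drivers", *driver)
	registered := fmt.Sprintf("Register new plugin with name: %s", *driver)

	var firstFailure, registeredAt time.Time
	failures := 0

	sc := bufio.NewScanner(f)
	sc.Buffer(make([]byte, 0, 1024*1024), 1024*1024) // some entries are very long
	for sc.Scan() {
		line := sc.Text()
		m := tsRe.FindStringSubmatch(line)
		if m == nil {
			continue // not a journal-prefixed line
		}
		ts, err := time.Parse("2006 Jan _2 15:04:05", fmt.Sprintf("%d %s", *year, m[1]))
		if err != nil {
			continue
		}
		if n := strings.Count(line, notFound); n > 0 {
			failures += n
			if firstFailure.IsZero() {
				firstFailure = ts
			}
		}
		if registeredAt.IsZero() && strings.Contains(line, registered) {
			registeredAt = ts
		}
	}
	if err := sc.Err(); err != nil {
		fmt.Fprintln(os.Stderr, err)
		os.Exit(1)
	}

	fmt.Printf("%d volume operations failed mentioning %q\n", failures, *driver)
	if !firstFailure.IsZero() && !registeredAt.IsZero() {
		fmt.Printf("first failure %s, driver registered %s, blocked ~%s\n",
			firstFailure.Format("15:04:05"), registeredAt.Format("15:04:05"),
			registeredAt.Sub(firstFailure).Round(time.Second))
	}
}
```

Run against the decompressed kubelet.log (for example, go run csidelay.go -log kubelet.log), it would count the failures quoted above and report the gap between the earliest one and the 08:55:14.939 registration entry.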
Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.103212 5117 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/b1264ac67579ad07e7e9003054d44fe40dd55285a4b2f7dc74e48be1aee0868a/globalmount\"" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.156645 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-66587d64c8-nsh7s\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.232409 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.238765 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.336702 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" event={"ID":"08c5724f-8e70-4d84-acf6-3036bcbc7f4a","Type":"ContainerStarted","Data":"5e4d5f129d1945387013d9a62f7b5969f0493cf7e1c672fab451d3e490e08e49"} Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.336744 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" event={"ID":"08c5724f-8e70-4d84-acf6-3036bcbc7f4a","Type":"ContainerStarted","Data":"bacf93bbabd99f1ae40b15e215431380a67be23ea1da010e5afe90b1356b8921"} Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.344947 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ingress/router-default-68cf44c8b8-mlq9j" Jan 23 08:55:15 crc kubenswrapper[5117]: I0123 08:55:15.358192 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="hostpath-provisioner/csi-hostpathplugin-p5dvt" podStartSLOduration=16.35817538 podStartE2EDuration="16.35817538s" podCreationTimestamp="2026-01-23 08:54:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:15.357653746 +0000 UTC m=+127.113778792" watchObservedRunningTime="2026-01-23 08:55:15.35817538 +0000 UTC m=+127.114300406" Jan 23 08:55:16 crc kubenswrapper[5117]: E0123 08:55:16.383702 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:16 crc kubenswrapper[5117]: E0123 08:55:16.385901 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" 
containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:16 crc kubenswrapper[5117]: E0123 08:55:16.387784 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:16 crc kubenswrapper[5117]: E0123 08:55:16.387915 5117 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.631870 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.631937 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.631959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.631990 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.633821 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.633839 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.634076 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.644989 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 23 08:55:16 crc kubenswrapper[5117]: 
I0123 08:55:16.650355 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"networking-console-plugin-cert\" (UniqueName: \"kubernetes.io/secret/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-networking-console-plugin-cert\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.656392 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gwt8b\" (UniqueName: \"kubernetes.io/projected/17b87002-b798-480a-8e17-83053d698239-kube-api-access-gwt8b\") pod \"network-check-target-fhkjl\" (UID: \"17b87002-b798-480a-8e17-83053d698239\") " pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.657214 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l7w75\" (UniqueName: \"kubernetes.io/projected/f863fff9-286a-45fa-b8f0-8a86994b8440-kube-api-access-l7w75\") pod \"network-check-source-5bb8f5cd97-xdvz5\" (UID: \"f863fff9-286a-45fa-b8f0-8a86994b8440\") " pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.778933 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9e9b5059-1b3e-4067-a63d-2952cbe863af" path="/var/lib/kubelet/pods/9e9b5059-1b3e-4067-a63d-2952cbe863af/volumes" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.846707 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" Jan 23 08:55:16 crc kubenswrapper[5117]: I0123 08:55:16.853344 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:55:17 crc kubenswrapper[5117]: I0123 08:55:17.142281 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 23 08:55:17 crc kubenswrapper[5117]: I0123 08:55:17.307084 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"nginx-conf\" (UniqueName: \"kubernetes.io/configmap/6a9ae5f6-97bd-46ac-bafa-ca1b4452a141-nginx-conf\") pod \"networking-console-plugin-5ff7774fd9-nljh6\" (UID: \"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141\") " pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:17 crc kubenswrapper[5117]: I0123 08:55:17.420619 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.345303 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.345667 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-dns/dns-default-mlfmh" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.345696 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/downloads-747b44746d-lr6gt" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.346763 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.350570 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.350632 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.478360 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/793bc85d-2687-46b1-9353-720b14ebd81b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.478937 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/793bc85d-2687-46b1-9353-720b14ebd81b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.580532 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/793bc85d-2687-46b1-9353-720b14ebd81b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.580801 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/793bc85d-2687-46b1-9353-720b14ebd81b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.580635 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/793bc85d-2687-46b1-9353-720b14ebd81b-kubelet-dir\") pod \"revision-pruner-11-crc\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.601476 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/793bc85d-2687-46b1-9353-720b14ebd81b-kube-api-access\") pod \"revision-pruner-11-crc\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:19 crc kubenswrapper[5117]: I0123 08:55:19.667953 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:20 crc kubenswrapper[5117]: I0123 08:55:20.213980 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:55:20 crc kubenswrapper[5117]: I0123 08:55:20.215244 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.301434 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.387903 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" event={"ID":"f2500020-ee51-4792-a6d0-ca4c0f0fdec4","Type":"ContainerDied","Data":"78b31edebaa08d7736d66bee31568d1b91cab2fee33aa11a1774cf0cc0153091"} Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.387941 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78b31edebaa08d7736d66bee31568d1b91cab2fee33aa11a1774cf0cc0153091" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.388058 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.402933 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-config-volume\") pod \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.403116 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-secret-volume\") pod \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.403185 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mxk74\" (UniqueName: \"kubernetes.io/projected/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-kube-api-access-mxk74\") pod \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\" (UID: \"f2500020-ee51-4792-a6d0-ca4c0f0fdec4\") " Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.404534 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-config-volume" (OuterVolumeSpecName: "config-volume") pod "f2500020-ee51-4792-a6d0-ca4c0f0fdec4" (UID: "f2500020-ee51-4792-a6d0-ca4c0f0fdec4"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.408374 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-kube-api-access-mxk74" (OuterVolumeSpecName: "kube-api-access-mxk74") pod "f2500020-ee51-4792-a6d0-ca4c0f0fdec4" (UID: "f2500020-ee51-4792-a6d0-ca4c0f0fdec4"). InnerVolumeSpecName "kube-api-access-mxk74". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.409155 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "f2500020-ee51-4792-a6d0-ca4c0f0fdec4" (UID: "f2500020-ee51-4792-a6d0-ca4c0f0fdec4"). InnerVolumeSpecName "secret-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.504982 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.505022 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mxk74\" (UniqueName: \"kubernetes.io/projected/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-kube-api-access-mxk74\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:21 crc kubenswrapper[5117]: I0123 08:55:21.505031 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/f2500020-ee51-4792-a6d0-ca4c0f0fdec4-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:22 crc kubenswrapper[5117]: I0123 08:55:22.990926 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:22 crc kubenswrapper[5117]: I0123 08:55:22.997166 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-console/console-64d44f6ddf-mkrsr" Jan 23 08:55:24 crc kubenswrapper[5117]: I0123 08:55:24.071195 5117 ???:1] "http: TLS handshake error from 192.168.126.11:46652: no serving certificate available for the kubelet" Jan 23 08:55:25 crc kubenswrapper[5117]: I0123 08:55:25.339692 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 08:55:25 crc kubenswrapper[5117]: I0123 08:55:25.943550 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.064423 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kube-api-access\") pod \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.064737 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kubelet-dir\") pod \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\" (UID: \"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d\") " Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.064832 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "7f1ebeed-bd48-41b0-9d0f-ef2462236f5d" (UID: "7f1ebeed-bd48-41b0-9d0f-ef2462236f5d"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.065329 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.071269 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "7f1ebeed-bd48-41b0-9d0f-ef2462236f5d" (UID: "7f1ebeed-bd48-41b0-9d0f-ef2462236f5d"). 
InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.166881 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/7f1ebeed-bd48-41b0-9d0f-ef2462236f5d-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:26 crc kubenswrapper[5117]: E0123 08:55:26.380821 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:26 crc kubenswrapper[5117]: E0123 08:55:26.382818 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:26 crc kubenswrapper[5117]: E0123 08:55:26.384207 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:26 crc kubenswrapper[5117]: E0123 08:55:26.384260 5117 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.412731 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-scheduler/revision-pruner-6-crc" event={"ID":"7f1ebeed-bd48-41b0-9d0f-ef2462236f5d","Type":"ContainerDied","Data":"b7376f7ce9ebdba97a6af6415faa29fe66682819937bb6c4576f54408ac5620e"} Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.412770 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b7376f7ce9ebdba97a6af6415faa29fe66682819937bb6c4576f54408ac5620e" Jan 23 08:55:26 crc kubenswrapper[5117]: I0123 08:55:26.412740 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-scheduler/revision-pruner-6-crc" Jan 23 08:55:36 crc kubenswrapper[5117]: E0123 08:55:36.381310 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:36 crc kubenswrapper[5117]: E0123 08:55:36.383595 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:36 crc kubenswrapper[5117]: E0123 08:55:36.385013 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" cmd=["/bin/bash","-c","test -f /ready/ready"] Jan 23 08:55:36 crc kubenswrapper[5117]: E0123 08:55:36.385064 5117 prober.go:104] "Probe errored" err="rpc error: code = Unknown desc = command error: cannot register an exec PID: container is stopping, stdout: , stderr: , exit code -1" probeType="Readiness" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" probeResult="unknown" Jan 23 08:55:40 crc kubenswrapper[5117]: I0123 08:55:40.218008 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" Jan 23 08:55:41 crc kubenswrapper[5117]: I0123 08:55:41.489641 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-8vvn5_c3884778-855d-4be3-aeab-4a9552ec10ac/kube-multus-additional-cni-plugins/0.log" Jan 23 08:55:41 crc kubenswrapper[5117]: I0123 08:55:41.489933 5117 generic.go:358] "Generic (PLEG): container finished" podID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" exitCode=137 Jan 23 08:55:41 crc kubenswrapper[5117]: I0123 08:55:41.490030 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" event={"ID":"c3884778-855d-4be3-aeab-4a9552ec10ac","Type":"ContainerDied","Data":"caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab"} Jan 23 08:55:44 crc kubenswrapper[5117]: I0123 08:55:44.573458 5117 ???:1] "http: TLS handshake error from 192.168.126.11:50322: no serving certificate available for the kubelet" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.349089 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-8vvn5_c3884778-855d-4be3-aeab-4a9552ec10ac/kube-multus-additional-cni-plugins/0.log" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.349968 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.463556 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3884778-855d-4be3-aeab-4a9552ec10ac-cni-sysctl-allowlist\") pod \"c3884778-855d-4be3-aeab-4a9552ec10ac\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.463661 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c3884778-855d-4be3-aeab-4a9552ec10ac-ready\") pod \"c3884778-855d-4be3-aeab-4a9552ec10ac\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.463764 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-b7pw6\" (UniqueName: \"kubernetes.io/projected/c3884778-855d-4be3-aeab-4a9552ec10ac-kube-api-access-b7pw6\") pod \"c3884778-855d-4be3-aeab-4a9552ec10ac\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.463794 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3884778-855d-4be3-aeab-4a9552ec10ac-tuning-conf-dir\") pod \"c3884778-855d-4be3-aeab-4a9552ec10ac\" (UID: \"c3884778-855d-4be3-aeab-4a9552ec10ac\") " Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.464114 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/c3884778-855d-4be3-aeab-4a9552ec10ac-tuning-conf-dir" (OuterVolumeSpecName: "tuning-conf-dir") pod "c3884778-855d-4be3-aeab-4a9552ec10ac" (UID: "c3884778-855d-4be3-aeab-4a9552ec10ac"). InnerVolumeSpecName "tuning-conf-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.465101 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c3884778-855d-4be3-aeab-4a9552ec10ac-cni-sysctl-allowlist" (OuterVolumeSpecName: "cni-sysctl-allowlist") pod "c3884778-855d-4be3-aeab-4a9552ec10ac" (UID: "c3884778-855d-4be3-aeab-4a9552ec10ac"). InnerVolumeSpecName "cni-sysctl-allowlist". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.465201 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c3884778-855d-4be3-aeab-4a9552ec10ac-ready" (OuterVolumeSpecName: "ready") pod "c3884778-855d-4be3-aeab-4a9552ec10ac" (UID: "c3884778-855d-4be3-aeab-4a9552ec10ac"). InnerVolumeSpecName "ready". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.483773 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c3884778-855d-4be3-aeab-4a9552ec10ac-kube-api-access-b7pw6" (OuterVolumeSpecName: "kube-api-access-b7pw6") pod "c3884778-855d-4be3-aeab-4a9552ec10ac" (UID: "c3884778-855d-4be3-aeab-4a9552ec10ac"). InnerVolumeSpecName "kube-api-access-b7pw6". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.534818 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_cni-sysctl-allowlist-ds-8vvn5_c3884778-855d-4be3-aeab-4a9552ec10ac/kube-multus-additional-cni-plugins/0.log" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.535339 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.535616 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/cni-sysctl-allowlist-ds-8vvn5" event={"ID":"c3884778-855d-4be3-aeab-4a9552ec10ac","Type":"ContainerDied","Data":"e125e9a3e6df5ba65342b918af8fc5fcb0081e4a01e6124d829a8cb3ad03e92c"} Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.535690 5117 scope.go:117] "RemoveContainer" containerID="caf591c6dc5be2791016d01572bbc747ca76f4f4b4a08cc054c7ec18a3117dab" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.569395 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-11-crc"] Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.572437 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-b7pw6\" (UniqueName: \"kubernetes.io/projected/c3884778-855d-4be3-aeab-4a9552ec10ac-kube-api-access-b7pw6\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.572468 5117 reconciler_common.go:299] "Volume detached for volume \"tuning-conf-dir\" (UniqueName: \"kubernetes.io/host-path/c3884778-855d-4be3-aeab-4a9552ec10ac-tuning-conf-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.572478 5117 reconciler_common.go:299] "Volume detached for volume \"cni-sysctl-allowlist\" (UniqueName: \"kubernetes.io/configmap/c3884778-855d-4be3-aeab-4a9552ec10ac-cni-sysctl-allowlist\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.572490 5117 reconciler_common.go:299] "Volume detached for volume \"ready\" (UniqueName: \"kubernetes.io/empty-dir/c3884778-855d-4be3-aeab-4a9552ec10ac-ready\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.626828 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-8vvn5"] Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.628562 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-multus/cni-sysctl-allowlist-ds-8vvn5"] Jan 23 08:55:45 crc kubenswrapper[5117]: I0123 08:55:45.870681 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nsh7s"] Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.545821 5117 generic.go:358] "Generic (PLEG): container finished" podID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerID="d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.545921 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2wfd" event={"ID":"861a8fa4-2b12-475a-819b-74238f4d1a60","Type":"ContainerDied","Data":"d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.551408 5117 generic.go:358] "Generic (PLEG): container finished" podID="86067e48-6b0c-408d-be6c-cd65aff8ef97" 
containerID="9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.551485 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlr9s" event={"ID":"86067e48-6b0c-408d-be6c-cd65aff8ef97","Type":"ContainerDied","Data":"9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.556879 5117 generic.go:358] "Generic (PLEG): container finished" podID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerID="8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.556995 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz52g" event={"ID":"dea7fdf2-1eec-4ae5-972b-2dc333b46e24","Type":"ContainerDied","Data":"8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.558104 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"63e8997a2af7ed748325c49aa6bf1c59bf2325639fb7de72ffe0837cdd5cc75b"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.558145 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-console/networking-console-plugin-5ff7774fd9-nljh6" event={"ID":"6a9ae5f6-97bd-46ac-bafa-ca1b4452a141","Type":"ContainerStarted","Data":"507841194e604d6e159934b91cdbdbed566bc7b4b8ec8094f3dd556327525df5"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.559850 5117 generic.go:358] "Generic (PLEG): container finished" podID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerID="2db8fae332e6cd7a8db783c4d65cb97a27f2183176cff56e451fa26e5e26382f" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.559888 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wt6p" event={"ID":"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a","Type":"ContainerDied","Data":"2db8fae332e6cd7a8db783c4d65cb97a27f2183176cff56e451fa26e5e26382f"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.562747 5117 generic.go:358] "Generic (PLEG): container finished" podID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerID="abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.562834 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmlw5" event={"ID":"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a","Type":"ContainerDied","Data":"abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.566812 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"793bc85d-2687-46b1-9353-720b14ebd81b","Type":"ContainerStarted","Data":"005f07b99be976cc39834311d289921aa9eef25430162813301e51e4c1fa4c45"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.566847 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"793bc85d-2687-46b1-9353-720b14ebd81b","Type":"ContainerStarted","Data":"edf4a148b87af93cca1f87cd188210021947953a37af39aa9904cd5c0ee45555"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.574391 5117 
generic.go:358] "Generic (PLEG): container finished" podID="32b82517-9793-44f5-bc31-05ba0d27c553" containerID="b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.574669 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qf4s" event={"ID":"32b82517-9793-44f5-bc31-05ba0d27c553","Type":"ContainerDied","Data":"b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.577314 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"687385d7e7d8cf4251b887aceddb82c913cb6e9c5a39864eeb0809ceb9e9b458"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.577349 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-source-5bb8f5cd97-xdvz5" event={"ID":"f863fff9-286a-45fa-b8f0-8a86994b8440","Type":"ContainerStarted","Data":"114d48774c63fd515e515deac14623cb446fed40f5b3d6f0c39fa76e41eb8780"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.579554 5117 generic.go:358] "Generic (PLEG): container finished" podID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerID="ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.579677 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5dxg" event={"ID":"853c0f27-c63d-47ba-844c-e0ae7a71b079","Type":"ContainerDied","Data":"ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.582566 5117 generic.go:358] "Generic (PLEG): container finished" podID="aace2d11-7f7c-464c-b258-c61edb938e83" containerID="f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0" exitCode=0 Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.582637 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgm87" event={"ID":"aace2d11-7f7c-464c-b258-c61edb938e83","Type":"ContainerDied","Data":"f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.587596 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"e2bb6533d03d22f5e2249dd51913efc75ef4bbcf33e64e39af30123171a2941b"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.587629 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-network-diagnostics/network-check-target-fhkjl" event={"ID":"17b87002-b798-480a-8e17-83053d698239","Type":"ContainerStarted","Data":"0a6965707cd49434f32c32152cdee40a75058080cdbc397ea6bc1b6450b93351"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.591209 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.620947 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" event={"ID":"3668af23-b087-479a-b9d8-d6e8b963ce57","Type":"ContainerStarted","Data":"55bd5d9f253c764c9de68c6b537afb1198b3effe91f5a7d83788f444010a799b"} 
Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.620995 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" event={"ID":"3668af23-b087-479a-b9d8-d6e8b963ce57","Type":"ContainerStarted","Data":"bfe401a096a959c41eda2119b14b3face2ee34802d4c99e4b0848a7cb2580fba"} Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.631025 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.684069 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" podStartSLOduration=137.684050246 podStartE2EDuration="2m17.684050246s" podCreationTimestamp="2026-01-23 08:53:29 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:46.681088607 +0000 UTC m=+158.437213643" watchObservedRunningTime="2026-01-23 08:55:46.684050246 +0000 UTC m=+158.440175272" Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.746735 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/revision-pruner-11-crc" podStartSLOduration=29.746710774 podStartE2EDuration="29.746710774s" podCreationTimestamp="2026-01-23 08:55:17 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:55:46.745421205 +0000 UTC m=+158.501546231" watchObservedRunningTime="2026-01-23 08:55:46.746710774 +0000 UTC m=+158.502835830" Jan 23 08:55:46 crc kubenswrapper[5117]: I0123 08:55:46.802958 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" path="/var/lib/kubelet/pods/c3884778-855d-4be3-aeab-4a9552ec10ac/volumes" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.628807 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qf4s" event={"ID":"32b82517-9793-44f5-bc31-05ba0d27c553","Type":"ContainerStarted","Data":"b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.634883 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5dxg" event={"ID":"853c0f27-c63d-47ba-844c-e0ae7a71b079","Type":"ContainerStarted","Data":"c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.638923 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgm87" event={"ID":"aace2d11-7f7c-464c-b258-c61edb938e83","Type":"ContainerStarted","Data":"6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.641265 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2wfd" event={"ID":"861a8fa4-2b12-475a-819b-74238f4d1a60","Type":"ContainerStarted","Data":"00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.643281 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlr9s" 
event={"ID":"86067e48-6b0c-408d-be6c-cd65aff8ef97","Type":"ContainerStarted","Data":"276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.645160 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz52g" event={"ID":"dea7fdf2-1eec-4ae5-972b-2dc333b46e24","Type":"ContainerStarted","Data":"95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.646786 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wt6p" event={"ID":"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a","Type":"ContainerStarted","Data":"477da4f4cc439582026cad6360b67b5eeca0acf66606489dd6bc715616ee12e5"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.650357 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmlw5" event={"ID":"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a","Type":"ContainerStarted","Data":"2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.651864 5117 generic.go:358] "Generic (PLEG): container finished" podID="793bc85d-2687-46b1-9353-720b14ebd81b" containerID="005f07b99be976cc39834311d289921aa9eef25430162813301e51e4c1fa4c45" exitCode=0 Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.651933 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"793bc85d-2687-46b1-9353-720b14ebd81b","Type":"ContainerDied","Data":"005f07b99be976cc39834311d289921aa9eef25430162813301e51e4c1fa4c45"} Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.664651 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-7qf4s" podStartSLOduration=4.521798544 podStartE2EDuration="38.66463153s" podCreationTimestamp="2026-01-23 08:55:09 +0000 UTC" firstStartedPulling="2026-01-23 08:55:11.228059447 +0000 UTC m=+122.984184473" lastFinishedPulling="2026-01-23 08:55:45.370892443 +0000 UTC m=+157.127017459" observedRunningTime="2026-01-23 08:55:47.663503236 +0000 UTC m=+159.419628272" watchObservedRunningTime="2026-01-23 08:55:47.66463153 +0000 UTC m=+159.420756556" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.707620 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-v2wfd" podStartSLOduration=5.678400312 podStartE2EDuration="37.707602428s" podCreationTimestamp="2026-01-23 08:55:10 +0000 UTC" firstStartedPulling="2026-01-23 08:55:13.282454732 +0000 UTC m=+125.038579758" lastFinishedPulling="2026-01-23 08:55:45.311656848 +0000 UTC m=+157.067781874" observedRunningTime="2026-01-23 08:55:47.705160104 +0000 UTC m=+159.461285130" watchObservedRunningTime="2026-01-23 08:55:47.707602428 +0000 UTC m=+159.463727454" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.707928 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-hmlw5" podStartSLOduration=4.675153448 podStartE2EDuration="35.707923097s" podCreationTimestamp="2026-01-23 08:55:12 +0000 UTC" firstStartedPulling="2026-01-23 08:55:14.314665401 +0000 UTC m=+126.070790417" lastFinishedPulling="2026-01-23 08:55:45.34743504 +0000 UTC m=+157.103560066" observedRunningTime="2026-01-23 08:55:47.683803664 +0000 UTC m=+159.439928680" watchObservedRunningTime="2026-01-23 
08:55:47.707923097 +0000 UTC m=+159.464048133" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.748738 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-k5dxg" podStartSLOduration=5.693164745 podStartE2EDuration="38.74871969s" podCreationTimestamp="2026-01-23 08:55:09 +0000 UTC" firstStartedPulling="2026-01-23 08:55:12.256315149 +0000 UTC m=+124.012440175" lastFinishedPulling="2026-01-23 08:55:45.311870094 +0000 UTC m=+157.067995120" observedRunningTime="2026-01-23 08:55:47.728579166 +0000 UTC m=+159.484704212" watchObservedRunningTime="2026-01-23 08:55:47.74871969 +0000 UTC m=+159.504844716" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.766529 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cz52g" podStartSLOduration=5.6994383509999995 podStartE2EDuration="38.766511123s" podCreationTimestamp="2026-01-23 08:55:09 +0000 UTC" firstStartedPulling="2026-01-23 08:55:12.27610012 +0000 UTC m=+124.032225146" lastFinishedPulling="2026-01-23 08:55:45.343172892 +0000 UTC m=+157.099297918" observedRunningTime="2026-01-23 08:55:47.764675979 +0000 UTC m=+159.520801005" watchObservedRunningTime="2026-01-23 08:55:47.766511123 +0000 UTC m=+159.522636149" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.766843 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-9wt6p" podStartSLOduration=4.671708139 podStartE2EDuration="35.766837873s" podCreationTimestamp="2026-01-23 08:55:12 +0000 UTC" firstStartedPulling="2026-01-23 08:55:14.313092978 +0000 UTC m=+126.069218004" lastFinishedPulling="2026-01-23 08:55:45.408222712 +0000 UTC m=+157.164347738" observedRunningTime="2026-01-23 08:55:47.749064141 +0000 UTC m=+159.505189177" watchObservedRunningTime="2026-01-23 08:55:47.766837873 +0000 UTC m=+159.522962889" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.789348 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-cgm87" podStartSLOduration=4.6833509840000005 podStartE2EDuration="38.789328448s" podCreationTimestamp="2026-01-23 08:55:09 +0000 UTC" firstStartedPulling="2026-01-23 08:55:11.229169578 +0000 UTC m=+122.985294604" lastFinishedPulling="2026-01-23 08:55:45.335147042 +0000 UTC m=+157.091272068" observedRunningTime="2026-01-23 08:55:47.78539435 +0000 UTC m=+159.541519376" watchObservedRunningTime="2026-01-23 08:55:47.789328448 +0000 UTC m=+159.545453474" Jan 23 08:55:47 crc kubenswrapper[5117]: I0123 08:55:47.825824 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-qlr9s" podStartSLOduration=4.759756646 podStartE2EDuration="36.825801441s" podCreationTimestamp="2026-01-23 08:55:11 +0000 UTC" firstStartedPulling="2026-01-23 08:55:13.288459417 +0000 UTC m=+125.044584443" lastFinishedPulling="2026-01-23 08:55:45.354504202 +0000 UTC m=+157.110629238" observedRunningTime="2026-01-23 08:55:47.825424919 +0000 UTC m=+159.581549945" watchObservedRunningTime="2026-01-23 08:55:47.825801441 +0000 UTC m=+159.581926467" Jan 23 08:55:48 crc kubenswrapper[5117]: I0123 08:55:48.924702 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.027954 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/793bc85d-2687-46b1-9353-720b14ebd81b-kube-api-access\") pod \"793bc85d-2687-46b1-9353-720b14ebd81b\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.028096 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/793bc85d-2687-46b1-9353-720b14ebd81b-kubelet-dir\") pod \"793bc85d-2687-46b1-9353-720b14ebd81b\" (UID: \"793bc85d-2687-46b1-9353-720b14ebd81b\") " Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.028217 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/793bc85d-2687-46b1-9353-720b14ebd81b-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "793bc85d-2687-46b1-9353-720b14ebd81b" (UID: "793bc85d-2687-46b1-9353-720b14ebd81b"). InnerVolumeSpecName "kubelet-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.028412 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/793bc85d-2687-46b1-9353-720b14ebd81b-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.037371 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/793bc85d-2687-46b1-9353-720b14ebd81b-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "793bc85d-2687-46b1-9353-720b14ebd81b" (UID: "793bc85d-2687-46b1-9353-720b14ebd81b"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.129756 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/793bc85d-2687-46b1-9353-720b14ebd81b-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.427994 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.428055 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.491794 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.596913 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.597104 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.668641 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-11-crc" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.668701 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-11-crc" event={"ID":"793bc85d-2687-46b1-9353-720b14ebd81b","Type":"ContainerDied","Data":"edf4a148b87af93cca1f87cd188210021947953a37af39aa9904cd5c0ee45555"} Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.668749 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="edf4a148b87af93cca1f87cd188210021947953a37af39aa9904cd5c0ee45555" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.838703 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.838769 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.873430 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.996030 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:49 crc kubenswrapper[5117]: I0123 08:55:49.996099 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:50 crc kubenswrapper[5117]: I0123 08:55:50.050023 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:55:50 crc kubenswrapper[5117]: I0123 08:55:50.635165 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/community-operators-7qf4s" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="registry-server" probeResult="failure" output=< Jan 23 08:55:50 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s Jan 23 08:55:50 crc kubenswrapper[5117]: > Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135103 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135729 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f2500020-ee51-4792-a6d0-ca4c0f0fdec4" containerName="collect-profiles" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135753 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="f2500020-ee51-4792-a6d0-ca4c0f0fdec4" containerName="collect-profiles" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135815 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="793bc85d-2687-46b1-9353-720b14ebd81b" containerName="pruner" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135824 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="793bc85d-2687-46b1-9353-720b14ebd81b" containerName="pruner" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135847 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7f1ebeed-bd48-41b0-9d0f-ef2462236f5d" containerName="pruner" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135855 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7f1ebeed-bd48-41b0-9d0f-ef2462236f5d" 
containerName="pruner" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135869 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135878 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135972 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="793bc85d-2687-46b1-9353-720b14ebd81b" containerName="pruner" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135988 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="f2500020-ee51-4792-a6d0-ca4c0f0fdec4" containerName="collect-profiles" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.135997 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="c3884778-855d-4be3-aeab-4a9552ec10ac" containerName="kube-multus-additional-cni-plugins" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.136006 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7f1ebeed-bd48-41b0-9d0f-ef2462236f5d" containerName="pruner" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.719549 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.719878 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.721879 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.722516 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.855365 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.855406 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.864948 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504d8572-4511-482a-a68e-5dba24831f31-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.865048 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504d8572-4511-482a-a68e-5dba24831f31-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.897774 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.965858 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504d8572-4511-482a-a68e-5dba24831f31-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.965939 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504d8572-4511-482a-a68e-5dba24831f31-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.966039 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504d8572-4511-482a-a68e-5dba24831f31-kubelet-dir\") pod \"revision-pruner-12-crc\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:51 crc kubenswrapper[5117]: I0123 08:55:51.984869 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504d8572-4511-482a-a68e-5dba24831f31-kube-api-access\") pod \"revision-pruner-12-crc\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.035779 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.204799 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.206185 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.264494 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.321818 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/revision-pruner-12-crc"] Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.685436 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"504d8572-4511-482a-a68e-5dba24831f31","Type":"ContainerStarted","Data":"78cefa90dacfa15d0051112bc0a6e109d9dadb545265032b5ebed3a125639bd7"} Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.744180 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:52 crc kubenswrapper[5117]: I0123 08:55:52.744256 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.226392 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.226737 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.267624 5117 
kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.280508 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.282557 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.723032 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:55:53 crc kubenswrapper[5117]: I0123 08:55:53.782664 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-hmlw5" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="registry-server" probeResult="failure" output=< Jan 23 08:55:53 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s Jan 23 08:55:53 crc kubenswrapper[5117]: > Jan 23 08:55:54 crc kubenswrapper[5117]: I0123 08:55:54.979685 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlr9s"] Jan 23 08:55:55 crc kubenswrapper[5117]: I0123 08:55:55.711191 5117 generic.go:358] "Generic (PLEG): container finished" podID="504d8572-4511-482a-a68e-5dba24831f31" containerID="d7a3bdfb3af222fde31b9b341fb07db1988d08b05a27299972e90d6fa9d83e97" exitCode=0 Jan 23 08:55:55 crc kubenswrapper[5117]: I0123 08:55:55.711292 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"504d8572-4511-482a-a68e-5dba24831f31","Type":"ContainerDied","Data":"d7a3bdfb3af222fde31b9b341fb07db1988d08b05a27299972e90d6fa9d83e97"} Jan 23 08:55:55 crc kubenswrapper[5117]: I0123 08:55:55.712485 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-qlr9s" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="registry-server" containerID="cri-o://276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6" gracePeriod=2 Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.576576 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.719498 5117 generic.go:358] "Generic (PLEG): container finished" podID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerID="276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6" exitCode=0 Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.719914 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-qlr9s" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.720092 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlr9s" event={"ID":"86067e48-6b0c-408d-be6c-cd65aff8ef97","Type":"ContainerDied","Data":"276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6"} Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.720189 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-qlr9s" event={"ID":"86067e48-6b0c-408d-be6c-cd65aff8ef97","Type":"ContainerDied","Data":"e93354022bb22bafffe1a831f0fe5ca018c1b1cf0557dea6212f23c700a325ff"} Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.720219 5117 scope.go:117] "RemoveContainer" containerID="276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.734029 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-catalog-content\") pod \"86067e48-6b0c-408d-be6c-cd65aff8ef97\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.734115 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hhdwr\" (UniqueName: \"kubernetes.io/projected/86067e48-6b0c-408d-be6c-cd65aff8ef97-kube-api-access-hhdwr\") pod \"86067e48-6b0c-408d-be6c-cd65aff8ef97\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.734160 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-utilities\") pod \"86067e48-6b0c-408d-be6c-cd65aff8ef97\" (UID: \"86067e48-6b0c-408d-be6c-cd65aff8ef97\") " Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.735515 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-utilities" (OuterVolumeSpecName: "utilities") pod "86067e48-6b0c-408d-be6c-cd65aff8ef97" (UID: "86067e48-6b0c-408d-be6c-cd65aff8ef97"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.738060 5117 scope.go:117] "RemoveContainer" containerID="9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.750337 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "86067e48-6b0c-408d-be6c-cd65aff8ef97" (UID: "86067e48-6b0c-408d-be6c-cd65aff8ef97"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.752817 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86067e48-6b0c-408d-be6c-cd65aff8ef97-kube-api-access-hhdwr" (OuterVolumeSpecName: "kube-api-access-hhdwr") pod "86067e48-6b0c-408d-be6c-cd65aff8ef97" (UID: "86067e48-6b0c-408d-be6c-cd65aff8ef97"). InnerVolumeSpecName "kube-api-access-hhdwr". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.767499 5117 scope.go:117] "RemoveContainer" containerID="296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.784306 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wt6p"] Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.784906 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-9wt6p" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="registry-server" containerID="cri-o://477da4f4cc439582026cad6360b67b5eeca0acf66606489dd6bc715616ee12e5" gracePeriod=2 Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.793108 5117 scope.go:117] "RemoveContainer" containerID="276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6" Jan 23 08:55:56 crc kubenswrapper[5117]: E0123 08:55:56.795628 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6\": container with ID starting with 276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6 not found: ID does not exist" containerID="276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.795665 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6"} err="failed to get container status \"276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6\": rpc error: code = NotFound desc = could not find container \"276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6\": container with ID starting with 276461a2c6b8cf7689663c5f5322b55346d649206f82db0ee9984cfb11bbf1f6 not found: ID does not exist" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.795718 5117 scope.go:117] "RemoveContainer" containerID="9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115" Jan 23 08:55:56 crc kubenswrapper[5117]: E0123 08:55:56.796073 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115\": container with ID starting with 9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115 not found: ID does not exist" containerID="9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.796114 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115"} err="failed to get container status \"9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115\": rpc error: code = NotFound desc = could not find container \"9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115\": container with ID starting with 9fb530287202cf5d8e80daaecd0f708dcd3e42937067e6d054afad7c9aed3115 not found: ID does not exist" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.796165 5117 scope.go:117] "RemoveContainer" containerID="296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26" Jan 23 08:55:56 crc kubenswrapper[5117]: E0123 08:55:56.796562 5117 log.go:32] "ContainerStatus from runtime service failed" 
err="rpc error: code = NotFound desc = could not find container \"296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26\": container with ID starting with 296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26 not found: ID does not exist" containerID="296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.796585 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26"} err="failed to get container status \"296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26\": rpc error: code = NotFound desc = could not find container \"296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26\": container with ID starting with 296d6f66f97b11c41ee4441cf8d031bd8c8bcc6e52448a7558bf88220f4eab26 not found: ID does not exist" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.836253 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.836286 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hhdwr\" (UniqueName: \"kubernetes.io/projected/86067e48-6b0c-408d-be6c-cd65aff8ef97-kube-api-access-hhdwr\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.836297 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/86067e48-6b0c-408d-be6c-cd65aff8ef97-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:56 crc kubenswrapper[5117]: I0123 08:55:56.937919 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.036655 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlr9s"] Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.038902 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504d8572-4511-482a-a68e-5dba24831f31-kube-api-access\") pod \"504d8572-4511-482a-a68e-5dba24831f31\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.038982 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504d8572-4511-482a-a68e-5dba24831f31-kubelet-dir\") pod \"504d8572-4511-482a-a68e-5dba24831f31\" (UID: \"504d8572-4511-482a-a68e-5dba24831f31\") " Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.039217 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/504d8572-4511-482a-a68e-5dba24831f31-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "504d8572-4511-482a-a68e-5dba24831f31" (UID: "504d8572-4511-482a-a68e-5dba24831f31"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.039229 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-qlr9s"] Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.044096 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/504d8572-4511-482a-a68e-5dba24831f31-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "504d8572-4511-482a-a68e-5dba24831f31" (UID: "504d8572-4511-482a-a68e-5dba24831f31"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.140326 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/504d8572-4511-482a-a68e-5dba24831f31-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.140363 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/504d8572-4511-482a-a68e-5dba24831f31-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.532525 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533350 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="504d8572-4511-482a-a68e-5dba24831f31" containerName="pruner" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533366 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="504d8572-4511-482a-a68e-5dba24831f31" containerName="pruner" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533399 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="extract-content" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533413 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="extract-content" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533430 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="extract-utilities" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533437 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="extract-utilities" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533455 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="registry-server" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533462 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="registry-server" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533565 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" containerName="registry-server" Jan 23 08:55:57 crc kubenswrapper[5117]: I0123 08:55:57.533577 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="504d8572-4511-482a-a68e-5dba24831f31" containerName="pruner" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.284770 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["openshift-kube-apiserver/installer-12-crc"] Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.286067 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/revision-pruner-12-crc" event={"ID":"504d8572-4511-482a-a68e-5dba24831f31","Type":"ContainerDied","Data":"78cefa90dacfa15d0051112bc0a6e109d9dadb545265032b5ebed3a125639bd7"} Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.286096 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="78cefa90dacfa15d0051112bc0a6e109d9dadb545265032b5ebed3a125639bd7" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.286264 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/revision-pruner-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.287231 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.290524 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver\"/\"installer-sa-dockercfg-bqqnb\"" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.290660 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.352519 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-var-lock\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.352856 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aff08e0-04ca-4004-8505-2a786aec8e92-kube-api-access\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.352902 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-kubelet-dir\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.454114 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-var-lock\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.454536 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aff08e0-04ca-4004-8505-2a786aec8e92-kube-api-access\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.454582 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kubelet-dir\" (UniqueName: 
\"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-kubelet-dir\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.454648 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-var-lock\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.454720 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-kubelet-dir\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.486791 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aff08e0-04ca-4004-8505-2a786aec8e92-kube-api-access\") pod \"installer-12-crc\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.623724 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.758874 5117 generic.go:358] "Generic (PLEG): container finished" podID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerID="477da4f4cc439582026cad6360b67b5eeca0acf66606489dd6bc715616ee12e5" exitCode=0 Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.758908 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wt6p" event={"ID":"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a","Type":"ContainerDied","Data":"477da4f4cc439582026cad6360b67b5eeca0acf66606489dd6bc715616ee12e5"} Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.778105 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86067e48-6b0c-408d-be6c-cd65aff8ef97" path="/var/lib/kubelet/pods/86067e48-6b0c-408d-be6c-cd65aff8ef97/volumes" Jan 23 08:55:58 crc kubenswrapper[5117]: I0123 08:55:58.852012 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-kube-apiserver/installer-12-crc"] Jan 23 08:55:58 crc kubenswrapper[5117]: W0123 08:55:58.862604 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-pod4aff08e0_04ca_4004_8505_2a786aec8e92.slice/crio-8e2501166e8e2370f6758c688dca7fb7556196b3166f38d9c3b1e0cf9d7aaefa WatchSource:0}: Error finding container 8e2501166e8e2370f6758c688dca7fb7556196b3166f38d9c3b1e0cf9d7aaefa: Status 404 returned error can't find the container with id 8e2501166e8e2370f6758c688dca7fb7556196b3166f38d9c3b1e0cf9d7aaefa Jan 23 08:55:59 crc kubenswrapper[5117]: I0123 08:55:59.642113 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:55:59 crc kubenswrapper[5117]: I0123 08:55:59.765881 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4aff08e0-04ca-4004-8505-2a786aec8e92","Type":"ContainerStarted","Data":"8e2501166e8e2370f6758c688dca7fb7556196b3166f38d9c3b1e0cf9d7aaefa"} Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.328667 
5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.375839 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-l5l7t\" (UniqueName: \"kubernetes.io/projected/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-kube-api-access-l5l7t\") pod \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.375985 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-catalog-content\") pod \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.376024 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-utilities\") pod \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\" (UID: \"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a\") " Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.377366 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-utilities" (OuterVolumeSpecName: "utilities") pod "a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" (UID: "a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.381672 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-kube-api-access-l5l7t" (OuterVolumeSpecName: "kube-api-access-l5l7t") pod "a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" (UID: "a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a"). InnerVolumeSpecName "kube-api-access-l5l7t". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.478048 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.478098 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-l5l7t\" (UniqueName: \"kubernetes.io/projected/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-kube-api-access-l5l7t\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.533627 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.712704 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.714517 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.786378 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-9wt6p" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.788839 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-9wt6p" event={"ID":"a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a","Type":"ContainerDied","Data":"5c15c809bb925793321e61a2a1e7c1cedefb0d225157f7fdbe5ff8eb865c5fe3"} Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.788887 5117 scope.go:117] "RemoveContainer" containerID="477da4f4cc439582026cad6360b67b5eeca0acf66606489dd6bc715616ee12e5" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.801433 5117 scope.go:117] "RemoveContainer" containerID="2db8fae332e6cd7a8db783c4d65cb97a27f2183176cff56e451fa26e5e26382f" Jan 23 08:56:00 crc kubenswrapper[5117]: I0123 08:56:00.815754 5117 scope.go:117] "RemoveContainer" containerID="ba2e65e69c12425d479fef27af04ca01d5dc72c3c607309a583589a5fc986f3c" Jan 23 08:56:01 crc kubenswrapper[5117]: I0123 08:56:01.518762 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" (UID: "a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:01 crc kubenswrapper[5117]: I0123 08:56:01.606314 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:01 crc kubenswrapper[5117]: I0123 08:56:01.722415 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-9wt6p"] Jan 23 08:56:01 crc kubenswrapper[5117]: I0123 08:56:01.726550 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-9wt6p"] Jan 23 08:56:01 crc kubenswrapper[5117]: I0123 08:56:01.756559 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:56:02 crc kubenswrapper[5117]: I0123 08:56:02.779744 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" path="/var/lib/kubelet/pods/a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a/volumes" Jan 23 08:56:02 crc kubenswrapper[5117]: I0123 08:56:02.785943 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:56:02 crc kubenswrapper[5117]: I0123 08:56:02.799733 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4aff08e0-04ca-4004-8505-2a786aec8e92","Type":"ContainerStarted","Data":"0012b24ef7bcabb0d5d6f782464200a87d7f94b241a971bbf36fe9415300e030"} Jan 23 08:56:02 crc kubenswrapper[5117]: I0123 08:56:02.829339 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 08:56:05 crc kubenswrapper[5117]: I0123 08:56:05.008385 5117 patch_prober.go:28] interesting pod/package-server-manager-77f986bd66-ntn75 container/package-server-manager namespace/openshift-operator-lifecycle-manager: Liveness probe status=failure output="Get \"http://10.217.0.24:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" start-of-body= Jan 23 08:56:05 crc 
kubenswrapper[5117]: I0123 08:56:05.008760 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-operator-lifecycle-manager/package-server-manager-77f986bd66-ntn75" podUID="b11d81e9-7489-4632-ab25-5a9fdc51a275" containerName="package-server-manager" probeResult="failure" output="Get \"http://10.217.0.24:8080/healthz\": context deadline exceeded (Client.Timeout exceeded while awaiting headers)" Jan 23 08:56:05 crc kubenswrapper[5117]: I0123 08:56:05.245125 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cz52g"] Jan 23 08:56:05 crc kubenswrapper[5117]: I0123 08:56:05.245568 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cz52g" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="registry-server" containerID="cri-o://95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0" gracePeriod=2 Jan 23 08:56:05 crc kubenswrapper[5117]: I0123 08:56:05.258394 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/installer-12-crc" podStartSLOduration=8.258369774 podStartE2EDuration="8.258369774s" podCreationTimestamp="2026-01-23 08:55:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:56:05.251070135 +0000 UTC m=+177.007195171" watchObservedRunningTime="2026-01-23 08:56:05.258369774 +0000 UTC m=+177.014494820" Jan 23 08:56:05 crc kubenswrapper[5117]: I0123 08:56:05.265859 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5dxg"] Jan 23 08:56:05 crc kubenswrapper[5117]: I0123 08:56:05.266345 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-k5dxg" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="registry-server" containerID="cri-o://c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b" gracePeriod=2 Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.187767 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.197589 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.269844 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-catalog-content\") pod \"853c0f27-c63d-47ba-844c-e0ae7a71b079\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.269913 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cz89r\" (UniqueName: \"kubernetes.io/projected/853c0f27-c63d-47ba-844c-e0ae7a71b079-kube-api-access-cz89r\") pod \"853c0f27-c63d-47ba-844c-e0ae7a71b079\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.269950 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-utilities\") pod \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.270013 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r2v64\" (UniqueName: \"kubernetes.io/projected/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-kube-api-access-r2v64\") pod \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.270031 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-catalog-content\") pod \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\" (UID: \"dea7fdf2-1eec-4ae5-972b-2dc333b46e24\") " Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.270111 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-utilities\") pod \"853c0f27-c63d-47ba-844c-e0ae7a71b079\" (UID: \"853c0f27-c63d-47ba-844c-e0ae7a71b079\") " Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.271163 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-utilities" (OuterVolumeSpecName: "utilities") pod "dea7fdf2-1eec-4ae5-972b-2dc333b46e24" (UID: "dea7fdf2-1eec-4ae5-972b-2dc333b46e24"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.276201 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/853c0f27-c63d-47ba-844c-e0ae7a71b079-kube-api-access-cz89r" (OuterVolumeSpecName: "kube-api-access-cz89r") pod "853c0f27-c63d-47ba-844c-e0ae7a71b079" (UID: "853c0f27-c63d-47ba-844c-e0ae7a71b079"). InnerVolumeSpecName "kube-api-access-cz89r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.276479 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-kube-api-access-r2v64" (OuterVolumeSpecName: "kube-api-access-r2v64") pod "dea7fdf2-1eec-4ae5-972b-2dc333b46e24" (UID: "dea7fdf2-1eec-4ae5-972b-2dc333b46e24"). InnerVolumeSpecName "kube-api-access-r2v64". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.277021 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-utilities" (OuterVolumeSpecName: "utilities") pod "853c0f27-c63d-47ba-844c-e0ae7a71b079" (UID: "853c0f27-c63d-47ba-844c-e0ae7a71b079"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.302569 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "dea7fdf2-1eec-4ae5-972b-2dc333b46e24" (UID: "dea7fdf2-1eec-4ae5-972b-2dc333b46e24"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.328205 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "853c0f27-c63d-47ba-844c-e0ae7a71b079" (UID: "853c0f27-c63d-47ba-844c-e0ae7a71b079"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.372299 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.372334 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/853c0f27-c63d-47ba-844c-e0ae7a71b079-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.372348 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cz89r\" (UniqueName: \"kubernetes.io/projected/853c0f27-c63d-47ba-844c-e0ae7a71b079-kube-api-access-cz89r\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.373247 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.373284 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r2v64\" (UniqueName: \"kubernetes.io/projected/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-kube-api-access-r2v64\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.373295 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/dea7fdf2-1eec-4ae5-972b-2dc333b46e24-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.822907 5117 generic.go:358] "Generic (PLEG): container finished" podID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerID="95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0" exitCode=0 Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.822960 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz52g" 
event={"ID":"dea7fdf2-1eec-4ae5-972b-2dc333b46e24","Type":"ContainerDied","Data":"95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0"} Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.823056 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cz52g" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.823381 5117 scope.go:117] "RemoveContainer" containerID="95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.823362 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cz52g" event={"ID":"dea7fdf2-1eec-4ae5-972b-2dc333b46e24","Type":"ContainerDied","Data":"18a45e4a8d81452c06de309d3ca67a6cae5a11074c5515241e64a0334e7e1935"} Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.828263 5117 generic.go:358] "Generic (PLEG): container finished" podID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerID="c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b" exitCode=0 Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.828336 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5dxg" event={"ID":"853c0f27-c63d-47ba-844c-e0ae7a71b079","Type":"ContainerDied","Data":"c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b"} Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.828367 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-k5dxg" event={"ID":"853c0f27-c63d-47ba-844c-e0ae7a71b079","Type":"ContainerDied","Data":"029205d2f175976cd0271093e3922e6037e61a4f7b2cb6a46e42d944ab050bf1"} Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.828533 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-k5dxg" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.853480 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cz52g"] Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.856598 5117 scope.go:117] "RemoveContainer" containerID="8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.858575 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cz52g"] Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.860848 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-k5dxg"] Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.863222 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-k5dxg"] Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.869672 5117 scope.go:117] "RemoveContainer" containerID="9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.888235 5117 scope.go:117] "RemoveContainer" containerID="95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0" Jan 23 08:56:06 crc kubenswrapper[5117]: E0123 08:56:06.888697 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0\": container with ID starting with 95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0 not found: ID does not exist" containerID="95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.888733 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0"} err="failed to get container status \"95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0\": rpc error: code = NotFound desc = could not find container \"95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0\": container with ID starting with 95507ccfb889687e017214495671c2b7363444f54637a0c39d65ee271fc043b0 not found: ID does not exist" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.888759 5117 scope.go:117] "RemoveContainer" containerID="8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287" Jan 23 08:56:06 crc kubenswrapper[5117]: E0123 08:56:06.889061 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287\": container with ID starting with 8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287 not found: ID does not exist" containerID="8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.889083 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287"} err="failed to get container status \"8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287\": rpc error: code = NotFound desc = could not find container \"8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287\": container with ID starting with 
8ed8f6dc5bcc21b68f1966e99694a9e31d5ba02091422e28e381f9146d43b287 not found: ID does not exist" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.889098 5117 scope.go:117] "RemoveContainer" containerID="9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1" Jan 23 08:56:06 crc kubenswrapper[5117]: E0123 08:56:06.889407 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1\": container with ID starting with 9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1 not found: ID does not exist" containerID="9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.889429 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1"} err="failed to get container status \"9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1\": rpc error: code = NotFound desc = could not find container \"9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1\": container with ID starting with 9d8bd79e9b59c12122f51f0aec306fef7c00a68cf609066d03676734bd92a5e1 not found: ID does not exist" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.889447 5117 scope.go:117] "RemoveContainer" containerID="c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.904249 5117 scope.go:117] "RemoveContainer" containerID="ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.921228 5117 scope.go:117] "RemoveContainer" containerID="bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.938694 5117 scope.go:117] "RemoveContainer" containerID="c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b" Jan 23 08:56:06 crc kubenswrapper[5117]: E0123 08:56:06.939299 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b\": container with ID starting with c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b not found: ID does not exist" containerID="c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.939331 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b"} err="failed to get container status \"c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b\": rpc error: code = NotFound desc = could not find container \"c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b\": container with ID starting with c282076022cf14db803ea950b1bd30d8dcd2dec380026f23c192b80d770d226b not found: ID does not exist" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.939378 5117 scope.go:117] "RemoveContainer" containerID="ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2" Jan 23 08:56:06 crc kubenswrapper[5117]: E0123 08:56:06.939984 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2\": container 
with ID starting with ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2 not found: ID does not exist" containerID="ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.940041 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2"} err="failed to get container status \"ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2\": rpc error: code = NotFound desc = could not find container \"ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2\": container with ID starting with ba1d9fb43066ebb7598d752e57150dff07bb0dca9e7fb7a105125586395570c2 not found: ID does not exist" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.940079 5117 scope.go:117] "RemoveContainer" containerID="bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52" Jan 23 08:56:06 crc kubenswrapper[5117]: E0123 08:56:06.940494 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52\": container with ID starting with bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52 not found: ID does not exist" containerID="bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52" Jan 23 08:56:06 crc kubenswrapper[5117]: I0123 08:56:06.940523 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52"} err="failed to get container status \"bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52\": rpc error: code = NotFound desc = could not find container \"bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52\": container with ID starting with bb4ba536a01ab3ccccc836fef5d8cc342353c32338768bc14e849b661c48cc52 not found: ID does not exist" Jan 23 08:56:07 crc kubenswrapper[5117]: I0123 08:56:07.659613 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 08:56:08 crc kubenswrapper[5117]: I0123 08:56:08.780692 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" path="/var/lib/kubelet/pods/853c0f27-c63d-47ba-844c-e0ae7a71b079/volumes" Jan 23 08:56:08 crc kubenswrapper[5117]: I0123 08:56:08.781777 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" path="/var/lib/kubelet/pods/dea7fdf2-1eec-4ae5-972b-2dc333b46e24/volumes" Jan 23 08:56:17 crc kubenswrapper[5117]: I0123 08:56:17.656800 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-network-diagnostics/network-check-target-fhkjl" Jan 23 08:56:20 crc kubenswrapper[5117]: I0123 08:56:20.145273 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-authentication/oauth-openshift-66458b6674-5dggj"] Jan 23 08:56:25 crc kubenswrapper[5117]: I0123 08:56:25.556548 5117 ???:1] "http: TLS handshake error from 192.168.126.11:50992: no serving certificate available for the kubelet" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.052024 5117 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053243 5117 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="extract-utilities" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053263 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="extract-utilities" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053291 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="extract-content" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053300 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="extract-content" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053317 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="extract-utilities" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053325 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="extract-utilities" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053335 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="extract-content" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053342 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="extract-content" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053353 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053361 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053376 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="extract-utilities" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053385 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="extract-utilities" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053398 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="extract-content" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053405 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="extract-content" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053419 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053426 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053443 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053452 5117 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053578 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="a9f767c0-2126-4f7d-bbdb-4c2ee6857b4a" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053592 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="dea7fdf2-1eec-4ae5-972b-2dc333b46e24" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.053608 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="853c0f27-c63d-47ba-844c-e0ae7a71b079" containerName="registry-server" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094230 5117 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094280 5117 kubelet.go:2537] "SyncLoop ADD" source="file" pods=["openshift-kube-apiserver/kube-apiserver-crc"] Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094475 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094817 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" containerID="cri-o://510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139" gracePeriod=15 Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094884 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" containerID="cri-o://2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b" gracePeriod=15 Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094915 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" containerID="cri-o://a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8" gracePeriod=15 Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094845 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" containerID="cri-o://b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7" gracePeriod=15 Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.094951 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" containerID="cri-o://47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6" gracePeriod=15 Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095710 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095758 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" 
containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095771 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095778 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095798 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095807 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095848 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095856 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="setup" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095874 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095882 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095891 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095922 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095947 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095954 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095965 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.095973 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096200 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096214 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096226 5117 memory_manager.go:356] "RemoveStaleState 
removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096235 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-regeneration-controller" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096276 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096285 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-insecure-readyz" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096298 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-cert-syncer" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096311 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096521 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096533 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096544 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096551 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.096809 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3a14caf222afb62aaabdc47808b6f944" containerName="kube-apiserver-check-endpoints" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.098304 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="3a14caf222afb62aaabdc47808b6f944" podUID="57755cc5f99000cc11e193051474d4e2" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.132724 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.133514 5117 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.235765 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.235842 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.235881 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.235945 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.236004 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.236078 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.236127 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.236247 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.236306 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.236347 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 
08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.337829 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.337874 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.337902 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.337944 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.337972 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.337991 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338009 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338051 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338059 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 
08:56:40.338027 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-resource-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338125 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338201 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338225 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338225 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338252 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-cert-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338294 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338324 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"kube-apiserver-startup-monitor-crc\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338510 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-ca-bundle-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338521 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/57755cc5f99000cc11e193051474d4e2-tmp-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.338590 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/57755cc5f99000cc11e193051474d4e2-audit-dir\") pod \"kube-apiserver-crc\" (UID: \"57755cc5f99000cc11e193051474d4e2\") " pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: I0123 08:56:40.465606 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.502362 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d505fae32cdf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:56:40.501153268 +0000 UTC m=+212.257278294,LastTimestamp:2026-01-23 08:56:40.501153268 +0000 UTC m=+212.257278294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.613554 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:40Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:40Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:40Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:40Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.613978 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:40 crc 
kubenswrapper[5117]: E0123 08:56:40.614218 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.614473 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.614734 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:40 crc kubenswrapper[5117]: E0123 08:56:40.614754 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.014735 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-check-endpoints/3.log" Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.017076 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.018095 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b" exitCode=0 Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.018156 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7" exitCode=0 Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.018175 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6" exitCode=0 Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.018187 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8" exitCode=2 Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.018216 5117 scope.go:117] "RemoveContainer" containerID="f1d86a166378dea1310bdb06761411da9d74d4ad77b74e00324386a8b2923b3d" Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.021165 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"886268cc908fe0c4b7e2c235665fe12d7f7600d63542a2c6b552dee61e56aeec"} Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.021213 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" event={"ID":"f7dbc7e1ee9c187a863ef9b473fad27b","Type":"ContainerStarted","Data":"bf32df8e3c878a1d8a700f0c1f162145a621b11a8e47e06ca11c56ffd382fa56"} Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.021430 5117 kubelet.go:3340] "Creating a mirror pod for static pod" 
pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:41 crc kubenswrapper[5117]: E0123 08:56:41.021996 5117 kubelet.go:3342] "Failed creating a mirror pod" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.023060 5117 generic.go:358] "Generic (PLEG): container finished" podID="4aff08e0-04ca-4004-8505-2a786aec8e92" containerID="0012b24ef7bcabb0d5d6f782464200a87d7f94b241a971bbf36fe9415300e030" exitCode=0 Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.023147 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4aff08e0-04ca-4004-8505-2a786aec8e92","Type":"ContainerDied","Data":"0012b24ef7bcabb0d5d6f782464200a87d7f94b241a971bbf36fe9415300e030"} Jan 23 08:56:41 crc kubenswrapper[5117]: I0123 08:56:41.023720 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.032337 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.434079 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.434908 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.491651 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.492515 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.493353 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.493755 5117 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499107 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-var-lock\") pod \"4aff08e0-04ca-4004-8505-2a786aec8e92\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499196 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-var-lock" (OuterVolumeSpecName: "var-lock") pod "4aff08e0-04ca-4004-8505-2a786aec8e92" (UID: "4aff08e0-04ca-4004-8505-2a786aec8e92"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499295 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aff08e0-04ca-4004-8505-2a786aec8e92-kube-api-access\") pod \"4aff08e0-04ca-4004-8505-2a786aec8e92\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499331 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-kubelet-dir\") pod \"4aff08e0-04ca-4004-8505-2a786aec8e92\" (UID: \"4aff08e0-04ca-4004-8505-2a786aec8e92\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499382 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-kubelet-dir" (OuterVolumeSpecName: "kubelet-dir") pod "4aff08e0-04ca-4004-8505-2a786aec8e92" (UID: "4aff08e0-04ca-4004-8505-2a786aec8e92"). InnerVolumeSpecName "kubelet-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499585 5117 reconciler_common.go:299] "Volume detached for volume \"kubelet-dir\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-kubelet-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.499606 5117 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/4aff08e0-04ca-4004-8505-2a786aec8e92-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.504919 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4aff08e0-04ca-4004-8505-2a786aec8e92-kube-api-access" (OuterVolumeSpecName: "kube-api-access") pod "4aff08e0-04ca-4004-8505-2a786aec8e92" (UID: "4aff08e0-04ca-4004-8505-2a786aec8e92"). InnerVolumeSpecName "kube-api-access". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601319 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601685 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601454 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601778 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601813 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601837 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") pod \"3a14caf222afb62aaabdc47808b6f944\" (UID: \"3a14caf222afb62aaabdc47808b6f944\") " Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.601886 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.602028 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir" (OuterVolumeSpecName: "cert-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "cert-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.602103 5117 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.602115 5117 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.602123 5117 reconciler_common.go:299] "Volume detached for volume \"cert-dir\" (UniqueName: \"kubernetes.io/host-path/3a14caf222afb62aaabdc47808b6f944-cert-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.602149 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access\" (UniqueName: \"kubernetes.io/projected/4aff08e0-04ca-4004-8505-2a786aec8e92-kube-api-access\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.602506 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir" (OuterVolumeSpecName: "ca-bundle-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "ca-bundle-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.606878 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir" (OuterVolumeSpecName: "tmp-dir") pod "3a14caf222afb62aaabdc47808b6f944" (UID: "3a14caf222afb62aaabdc47808b6f944"). InnerVolumeSpecName "tmp-dir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.703057 5117 reconciler_common.go:299] "Volume detached for volume \"ca-bundle-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-ca-bundle-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.703100 5117 reconciler_common.go:299] "Volume detached for volume \"tmp-dir\" (UniqueName: \"kubernetes.io/empty-dir/3a14caf222afb62aaabdc47808b6f944-tmp-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:42 crc kubenswrapper[5117]: I0123 08:56:42.778379 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3a14caf222afb62aaabdc47808b6f944" path="/var/lib/kubelet/pods/3a14caf222afb62aaabdc47808b6f944/volumes" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.042941 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/installer-12-crc" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.042979 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/installer-12-crc" event={"ID":"4aff08e0-04ca-4004-8505-2a786aec8e92","Type":"ContainerDied","Data":"8e2501166e8e2370f6758c688dca7fb7556196b3166f38d9c3b1e0cf9d7aaefa"} Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.043033 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e2501166e8e2370f6758c688dca7fb7556196b3166f38d9c3b1e0cf9d7aaefa" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.047757 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-crc_3a14caf222afb62aaabdc47808b6f944/kube-apiserver-cert-syncer/0.log" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.048051 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.048480 5117 generic.go:358] "Generic (PLEG): container finished" podID="3a14caf222afb62aaabdc47808b6f944" containerID="510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139" exitCode=0 Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.048578 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.048588 5117 scope.go:117] "RemoveContainer" containerID="2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.049126 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.049351 5117 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.051070 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.051362 5117 status_manager.go:895] "Failed to get status for pod" podUID="3a14caf222afb62aaabdc47808b6f944" pod="openshift-kube-apiserver/kube-apiserver-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.067560 5117 scope.go:117] "RemoveContainer" 
containerID="b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.082419 5117 scope.go:117] "RemoveContainer" containerID="47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.098502 5117 scope.go:117] "RemoveContainer" containerID="a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.125779 5117 scope.go:117] "RemoveContainer" containerID="510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.129003 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.129266 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.129528 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.129762 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.130035 5117 controller.go:195] "Failed to update lease" err="Put \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.130081 5117 controller.go:115] "failed to update lease using latest lease, fallback to ensure lease" err="failed 5 attempts to update lease" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.130405 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="200ms" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.149244 5117 scope.go:117] "RemoveContainer" containerID="7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.209922 5117 scope.go:117] "RemoveContainer" containerID="2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.210446 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b\": container with ID starting with 2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b not found: ID does not exist" containerID="2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.210478 5117 pod_container_deletor.go:53] 
"DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b"} err="failed to get container status \"2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b\": rpc error: code = NotFound desc = could not find container \"2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b\": container with ID starting with 2cf16f1fe33ff0dd68866db3d175db12714304f1dcaf070c1452014dd93d985b not found: ID does not exist" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.210592 5117 scope.go:117] "RemoveContainer" containerID="b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.213083 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7\": container with ID starting with b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7 not found: ID does not exist" containerID="b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.213164 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7"} err="failed to get container status \"b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7\": rpc error: code = NotFound desc = could not find container \"b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7\": container with ID starting with b2a53c8bb43bcd7a958411cb5eaa0c11362966221e3f30ee0c9af25da7dba1b7 not found: ID does not exist" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.213189 5117 scope.go:117] "RemoveContainer" containerID="47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.213706 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6\": container with ID starting with 47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6 not found: ID does not exist" containerID="47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.213741 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6"} err="failed to get container status \"47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6\": rpc error: code = NotFound desc = could not find container \"47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6\": container with ID starting with 47d59dc2effef49f79c3ecda510d997fd8c5f0dd382be9c1d591b8f344d1bbb6 not found: ID does not exist" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.213760 5117 scope.go:117] "RemoveContainer" containerID="a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.213949 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8\": container with ID starting with a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8 not found: ID does not exist" 
containerID="a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.213971 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8"} err="failed to get container status \"a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8\": rpc error: code = NotFound desc = could not find container \"a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8\": container with ID starting with a1df74c49404a351b7280c8517d775d65fcc3e1bb2e496e39b8e7fba7f3e68a8 not found: ID does not exist" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.213983 5117 scope.go:117] "RemoveContainer" containerID="510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.214183 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139\": container with ID starting with 510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139 not found: ID does not exist" containerID="510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.214205 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139"} err="failed to get container status \"510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139\": rpc error: code = NotFound desc = could not find container \"510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139\": container with ID starting with 510a92f571185d4f2db951e47ee4f57cc0f0626f05d7f5664303d2764b82c139 not found: ID does not exist" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.214222 5117 scope.go:117] "RemoveContainer" containerID="7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.214386 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b\": container with ID starting with 7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b not found: ID does not exist" containerID="7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b" Jan 23 08:56:43 crc kubenswrapper[5117]: I0123 08:56:43.214407 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b"} err="failed to get container status \"7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b\": rpc error: code = NotFound desc = could not find container \"7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b\": container with ID starting with 7a1d755107bd753c8612266d3a81209ccdc515132f83a40bdb3f637678a88e8b not found: ID does not exist" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.332160 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="400ms" Jan 23 08:56:43 crc kubenswrapper[5117]: E0123 08:56:43.733653 5117 
controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="800ms" Jan 23 08:56:44 crc kubenswrapper[5117]: E0123 08:56:44.535100 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="1.6s" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.063267 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.063659 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.201851 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" containerName="oauth-openshift" containerID="cri-o://0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e" gracePeriod=15 Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.584974 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.585523 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.585691 5117 status_manager.go:895] "Failed to get status for pod" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-5dggj\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645073 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-router-certs\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645178 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-policies\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645240 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-serving-cert\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645293 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-provider-selection\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645338 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qj4zb\" (UniqueName: \"kubernetes.io/projected/f399bc34-e976-4cd2-90df-40a3d08fb983-kube-api-access-qj4zb\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645357 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-idp-0-file-data\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645383 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-error\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: 
\"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645416 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-service-ca\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645444 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-session\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645471 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-dir\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645487 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-cliconfig\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645507 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-login\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645531 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-ocp-branding-template\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.645560 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-trusted-ca-bundle\") pod \"f399bc34-e976-4cd2-90df-40a3d08fb983\" (UID: \"f399bc34-e976-4cd2-90df-40a3d08fb983\") " Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.646516 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-dir" (OuterVolumeSpecName: "audit-dir") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "audit-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.647187 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-service-ca" (OuterVolumeSpecName: "v4-0-config-system-service-ca") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). 
InnerVolumeSpecName "v4-0-config-system-service-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.647558 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-cliconfig" (OuterVolumeSpecName: "v4-0-config-system-cliconfig") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-system-cliconfig". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.647915 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-trusted-ca-bundle" (OuterVolumeSpecName: "v4-0-config-system-trusted-ca-bundle") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-system-trusted-ca-bundle". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.647993 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-policies" (OuterVolumeSpecName: "audit-policies") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "audit-policies". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.652697 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-error" (OuterVolumeSpecName: "v4-0-config-user-template-error") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-user-template-error". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.653567 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f399bc34-e976-4cd2-90df-40a3d08fb983-kube-api-access-qj4zb" (OuterVolumeSpecName: "kube-api-access-qj4zb") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "kube-api-access-qj4zb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.653612 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-session" (OuterVolumeSpecName: "v4-0-config-system-session") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-system-session". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.653761 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-login" (OuterVolumeSpecName: "v4-0-config-user-template-login") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-user-template-login". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.654106 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-idp-0-file-data" (OuterVolumeSpecName: "v4-0-config-user-idp-0-file-data") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-user-idp-0-file-data". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.654319 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-serving-cert" (OuterVolumeSpecName: "v4-0-config-system-serving-cert") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-system-serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.655412 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-router-certs" (OuterVolumeSpecName: "v4-0-config-system-router-certs") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-system-router-certs". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.655689 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-provider-selection" (OuterVolumeSpecName: "v4-0-config-user-template-provider-selection") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-user-template-provider-selection". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.655707 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-ocp-branding-template" (OuterVolumeSpecName: "v4-0-config-system-ocp-branding-template") pod "f399bc34-e976-4cd2-90df-40a3d08fb983" (UID: "f399bc34-e976-4cd2-90df-40a3d08fb983"). InnerVolumeSpecName "v4-0-config-system-ocp-branding-template". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.747905 5117 reconciler_common.go:299] "Volume detached for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-policies\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.747955 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.747978 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-provider-selection\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.747995 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qj4zb\" (UniqueName: \"kubernetes.io/projected/f399bc34-e976-4cd2-90df-40a3d08fb983-kube-api-access-qj4zb\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748007 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-idp-0-file-data\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748018 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-error\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748029 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-service-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748043 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-session\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748054 5117 reconciler_common.go:299] "Volume detached for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/f399bc34-e976-4cd2-90df-40a3d08fb983-audit-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748064 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-cliconfig\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748075 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-user-template-login\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748086 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-ocp-branding-template\") on 
node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748098 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-trusted-ca-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:45 crc kubenswrapper[5117]: I0123 08:56:45.748109 5117 reconciler_common.go:299] "Volume detached for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/f399bc34-e976-4cd2-90df-40a3d08fb983-v4-0-config-system-router-certs\") on node \"crc\" DevicePath \"\"" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.069784 5117 generic.go:358] "Generic (PLEG): container finished" podID="f399bc34-e976-4cd2-90df-40a3d08fb983" containerID="0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e" exitCode=0 Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.069868 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.069877 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" event={"ID":"f399bc34-e976-4cd2-90df-40a3d08fb983","Type":"ContainerDied","Data":"0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e"} Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.069909 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" event={"ID":"f399bc34-e976-4cd2-90df-40a3d08fb983","Type":"ContainerDied","Data":"9dd78f0328d0e245357f6ea37c38e562b0b33e8122be8125b5adfe45a8f60b56"} Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.069928 5117 scope.go:117] "RemoveContainer" containerID="0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.071252 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.071967 5117 status_manager.go:895] "Failed to get status for pod" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-5dggj\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.083841 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.084216 5117 status_manager.go:895] "Failed to get status for pod" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-5dggj\": dial tcp 38.102.83.230:6443: connect: connection refused" 
Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.089318 5117 scope.go:117] "RemoveContainer" containerID="0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e" Jan 23 08:56:46 crc kubenswrapper[5117]: E0123 08:56:46.089913 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e\": container with ID starting with 0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e not found: ID does not exist" containerID="0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e" Jan 23 08:56:46 crc kubenswrapper[5117]: I0123 08:56:46.090043 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e"} err="failed to get container status \"0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e\": rpc error: code = NotFound desc = could not find container \"0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e\": container with ID starting with 0d450f443663e92eccdbbaaaef1df6a6bb3e19de9c482ffc2cddcaddbf27d99e not found: ID does not exist" Jan 23 08:56:46 crc kubenswrapper[5117]: E0123 08:56:46.135964 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="3.2s" Jan 23 08:56:46 crc kubenswrapper[5117]: E0123 08:56:46.225887 5117 event.go:368] "Unable to write event (may retry after sleeping)" err="Post \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/events\": dial tcp 38.102.83.230:6443: connect: connection refused" event="&Event{ObjectMeta:{kube-apiserver-startup-monitor-crc.188d505fae32cdf4 openshift-kube-apiserver 0 0001-01-01 00:00:00 +0000 UTC map[] map[] [] [] []},InvolvedObject:ObjectReference{Kind:Pod,Namespace:openshift-kube-apiserver,Name:kube-apiserver-startup-monitor-crc,UID:f7dbc7e1ee9c187a863ef9b473fad27b,APIVersion:v1,ResourceVersion:,FieldPath:spec.containers{startup-monitor},},Reason:Pulled,Message:Container image \"quay.io/crcont/openshift-crc-cluster-kube-apiserver-operator@sha256:68c07ee2fb6450c7b3b35bfdfc158dc475aaa0bcf9fba28b5e310d7e03355c04\" already present on machine,Source:EventSource{Component:kubelet,Host:crc,},FirstTimestamp:2026-01-23 08:56:40.501153268 +0000 UTC m=+212.257278294,LastTimestamp:2026-01-23 08:56:40.501153268 +0000 UTC m=+212.257278294,Count:1,Type:Normal,EventTime:0001-01-01 00:00:00 +0000 UTC,Series:nil,Action:,Related:nil,ReportingController:kubelet,ReportingInstance:crc,}" Jan 23 08:56:48 crc kubenswrapper[5117]: I0123 08:56:48.775036 5117 status_manager.go:895] "Failed to get status for pod" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-5dggj\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:48 crc kubenswrapper[5117]: I0123 08:56:48.776189 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: 
connect: connection refused" Jan 23 08:56:49 crc kubenswrapper[5117]: E0123 08:56:49.336951 5117 controller.go:145] "Failed to ensure lease exists, will retry" err="Get \"https://api-int.crc.testing:6443/apis/coordination.k8s.io/v1/namespaces/kube-node-lease/leases/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" interval="6.4s" Jan 23 08:56:50 crc kubenswrapper[5117]: I0123 08:56:50.770391 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:50 crc kubenswrapper[5117]: I0123 08:56:50.773003 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: I0123 08:56:50.773811 5117 status_manager.go:895] "Failed to get status for pod" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-5dggj\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: I0123 08:56:50.788078 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:50 crc kubenswrapper[5117]: I0123 08:56:50.788125 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.788960 5117 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:50 crc kubenswrapper[5117]: I0123 08:56:50.789631 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.891488 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="failed to patch status \"{\\\"status\\\":{\\\"$setElementOrder/conditions\\\":[{\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"type\\\":\\\"DiskPressure\\\"},{\\\"type\\\":\\\"PIDPressure\\\"},{\\\"type\\\":\\\"Ready\\\"}],\\\"conditions\\\":[{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:50Z\\\",\\\"type\\\":\\\"MemoryPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:50Z\\\",\\\"type\\\":\\\"DiskPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:50Z\\\",\\\"type\\\":\\\"PIDPressure\\\"},{\\\"lastHeartbeatTime\\\":\\\"2026-01-23T08:56:50Z\\\",\\\"type\\\":\\\"Ready\\\"}],\\\"runtimeHandlers\\\":[{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"crun\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"\\\"},{\\\"features\\\":{\\\"recursiveReadOnlyMounts\\\":true,\\\"userNamespaces\\\":true},\\\"name\\\":\\\"runc\\\"}]}}\" for node \"crc\": Patch \"https://api-int.crc.testing:6443/api/v1/nodes/crc/status?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.891886 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.892239 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.892532 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.892827 5117 kubelet_node_status.go:597] "Error updating node status, will retry" err="error getting node \"crc\": Get \"https://api-int.crc.testing:6443/api/v1/nodes/crc?timeout=10s\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:50 crc kubenswrapper[5117]: E0123 08:56:50.892851 5117 kubelet_node_status.go:584] "Unable to update node status" err="update node status exceeds retry count" Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.107918 5117 generic.go:358] "Generic (PLEG): container finished" podID="57755cc5f99000cc11e193051474d4e2" containerID="cd74626850035d66ae15ce411d5b3efb1d4cd9813ac7a894252a13e708d99522" exitCode=0 Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.108175 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerDied","Data":"cd74626850035d66ae15ce411d5b3efb1d4cd9813ac7a894252a13e708d99522"} Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.108417 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"665e2be884ccbd1d8a490da7f1b9b48245f3d8e59915e77c5ee595f8ccf3a010"} Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.108715 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.108735 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:51 crc kubenswrapper[5117]: E0123 08:56:51.109048 5117 mirror_client.go:138] "Failed deleting a mirror pod" err="Delete \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/kube-apiserver-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.109239 5117 status_manager.go:895] "Failed to get status for pod" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" pod="openshift-kube-apiserver/installer-12-crc" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-kube-apiserver/pods/installer-12-crc\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:51 crc kubenswrapper[5117]: I0123 08:56:51.109806 5117 status_manager.go:895] "Failed to get status for pod" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" pod="openshift-authentication/oauth-openshift-66458b6674-5dggj" err="Get \"https://api-int.crc.testing:6443/api/v1/namespaces/openshift-authentication/pods/oauth-openshift-66458b6674-5dggj\": dial tcp 38.102.83.230:6443: connect: connection refused" Jan 23 08:56:52 crc kubenswrapper[5117]: I0123 08:56:52.120658 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"437a159188a305d0359bf83ab3329c912b71ff1cda10d9254a808413c8f81e75"} Jan 23 08:56:52 crc kubenswrapper[5117]: I0123 08:56:52.120697 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"b87a0bc7e42e57aff8ee7ce50e3e2daa943a6ca2ea0932daf33a4b9d636b0619"} Jan 23 08:56:52 crc kubenswrapper[5117]: I0123 08:56:52.120708 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"7452ab89122318dee9bf72bf4cdcee5f4e59b627952461709b5afae7c6b5ec22"} Jan 23 08:56:52 crc kubenswrapper[5117]: I0123 08:56:52.120715 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"87b50a0765e98f81a08822377e04883ac54d6572018a64bbcb981761a4a0edfc"} Jan 23 08:56:53 crc kubenswrapper[5117]: I0123 08:56:53.127807 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-apiserver/kube-apiserver-crc" event={"ID":"57755cc5f99000cc11e193051474d4e2","Type":"ContainerStarted","Data":"429dc8675b16f0a5e1086561028c27559ea019ddb0b840812f16e0f1e4c3537c"} Jan 23 08:56:53 crc kubenswrapper[5117]: I0123 08:56:53.128095 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:53 crc kubenswrapper[5117]: I0123 08:56:53.128215 5117 
kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:53 crc kubenswrapper[5117]: I0123 08:56:53.128242 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:54 crc kubenswrapper[5117]: I0123 08:56:54.135915 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 08:56:54 crc kubenswrapper[5117]: I0123 08:56:54.135990 5117 generic.go:358] "Generic (PLEG): container finished" podID="9f0bc7fcb0822a2c13eb2d22cd8c0641" containerID="250a0dfb6e83a36400a2c1fee2a02069a3ac81a98ee32800a2e3fc5895460f6a" exitCode=1 Jan 23 08:56:54 crc kubenswrapper[5117]: I0123 08:56:54.136104 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerDied","Data":"250a0dfb6e83a36400a2c1fee2a02069a3ac81a98ee32800a2e3fc5895460f6a"} Jan 23 08:56:54 crc kubenswrapper[5117]: I0123 08:56:54.136822 5117 scope.go:117] "RemoveContainer" containerID="250a0dfb6e83a36400a2c1fee2a02069a3ac81a98ee32800a2e3fc5895460f6a" Jan 23 08:56:55 crc kubenswrapper[5117]: I0123 08:56:55.145027 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 08:56:55 crc kubenswrapper[5117]: I0123 08:56:55.145551 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-kube-controller-manager/kube-controller-manager-crc" event={"ID":"9f0bc7fcb0822a2c13eb2d22cd8c0641","Type":"ContainerStarted","Data":"4c37ce584815d515750079a285cc904f728642cd6b9b6fe6176a2c57d0143f20"} Jan 23 08:56:55 crc kubenswrapper[5117]: I0123 08:56:55.791530 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:55 crc kubenswrapper[5117]: I0123 08:56:55.791581 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:55 crc kubenswrapper[5117]: I0123 08:56:55.797406 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:57 crc kubenswrapper[5117]: I0123 08:56:57.929362 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:56:58 crc kubenswrapper[5117]: I0123 08:56:58.139465 5117 kubelet.go:3329] "Deleted mirror pod as it didn't match the static Pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:58 crc kubenswrapper[5117]: I0123 08:56:58.139505 5117 kubelet.go:3340] "Creating a mirror pod for static pod" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:56:58 crc kubenswrapper[5117]: I0123 08:56:58.789485 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="75da1c1d-fb21-4c46-bc1f-efc1f1496722" Jan 23 08:56:59 crc kubenswrapper[5117]: I0123 08:56:59.166065 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" 
podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:59 crc kubenswrapper[5117]: I0123 08:56:59.166093 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:56:59 crc kubenswrapper[5117]: I0123 08:56:59.169644 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="75da1c1d-fb21-4c46-bc1f-efc1f1496722" Jan 23 08:56:59 crc kubenswrapper[5117]: I0123 08:56:59.171151 5117 status_manager.go:346] "Container readiness changed before pod has synced" pod="openshift-kube-apiserver/kube-apiserver-crc" containerID="cri-o://87b50a0765e98f81a08822377e04883ac54d6572018a64bbcb981761a4a0edfc" Jan 23 08:56:59 crc kubenswrapper[5117]: I0123 08:56:59.171179 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:57:00 crc kubenswrapper[5117]: I0123 08:57:00.170645 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:57:00 crc kubenswrapper[5117]: I0123 08:57:00.170675 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:57:00 crc kubenswrapper[5117]: I0123 08:57:00.175692 5117 status_manager.go:905] "Pod was deleted and then recreated, skipping status update" pod="openshift-kube-apiserver/kube-apiserver-crc" oldPodUID="57755cc5f99000cc11e193051474d4e2" podUID="75da1c1d-fb21-4c46-bc1f-efc1f1496722" Jan 23 08:57:03 crc kubenswrapper[5117]: I0123 08:57:03.644909 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:57:03 crc kubenswrapper[5117]: I0123 08:57:03.662014 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:57:04 crc kubenswrapper[5117]: I0123 08:57:04.202538 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-controller-manager/kube-controller-manager-crc" Jan 23 08:57:04 crc kubenswrapper[5117]: I0123 08:57:04.474378 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 23 08:57:04 crc kubenswrapper[5117]: I0123 08:57:04.571880 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"openshift-service-ca.crt\"" Jan 23 08:57:04 crc kubenswrapper[5117]: I0123 08:57:04.706040 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"node-ca-dockercfg-tjs74\"" Jan 23 08:57:04 crc kubenswrapper[5117]: I0123 08:57:04.906956 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-dockercfg-gnx66\"" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.140756 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.477230 5117 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-scheduler-operator-serving-cert\"" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.514988 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-oauth-config\"" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.536020 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-serving-cert\"" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.593068 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mcc-proxy-tls\"" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.764194 5117 reflector.go:430] "Caches populated" type="*v1.Service" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:57:05 crc kubenswrapper[5117]: I0123 08:57:05.944460 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"config\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.023014 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"serving-cert\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.263602 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-serving-ca\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.359573 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-dockercfg-2cfkp\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.361348 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.373359 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"console-operator-config\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.718377 5117 reflector.go:430] "Caches populated" type="*v1.CSIDriver" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.813926 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-sa-dockercfg-t8n29\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.838616 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-rbac-proxy\"" Jan 23 08:57:06 crc kubenswrapper[5117]: I0123 08:57:06.916235 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.012673 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"node-bootstrapper-token\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.014406 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"metrics-tls\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.232695 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.350555 5117 reflector.go:430] "Caches populated" 
type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"catalog-operator-serving-cert\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.361187 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"openshift-apiserver-sa-dockercfg-4zqgh\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.390447 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"image-import-ca\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.490427 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"dns-default\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.522483 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"kube-root-ca.crt\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.678680 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"trusted-ca\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.708126 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-stats-default\"" Jan 23 08:57:07 crc kubenswrapper[5117]: I0123 08:57:07.971187 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"oauth-apiserver-sa-dockercfg-qqw4z\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.083289 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-storage-version-migrator-sa-dockercfg-kknhg\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.085287 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"ovnkube-identity-cm\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.114194 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"oauth-serving-cert\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.193299 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.355906 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.378662 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"node-resolver-dockercfg-tk7bt\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.487362 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-config\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.724987 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"trusted-ca-bundle\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.928341 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-tls\"" Jan 23 08:57:08 crc kubenswrapper[5117]: I0123 08:57:08.930659 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"env-overrides\"" Jan 23 08:57:09 crc 
kubenswrapper[5117]: I0123 08:57:09.268546 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"kube-root-ca.crt\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.291516 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"config\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.355405 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"kube-root-ca.crt\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.440547 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"trusted-ca\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.466422 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-tls\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.497158 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"serving-cert\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.620238 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"kube-root-ca.crt\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.699561 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"whereabouts-flatfile-config\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.700599 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"kube-root-ca.crt\"" Jan 23 08:57:09 crc kubenswrapper[5117]: I0123 08:57:09.951188 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"client-ca\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.009479 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"openshift-service-ca.crt\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.063628 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"openshift-config-operator-dockercfg-sjn6s\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.155724 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-dockercfg-bf7fj\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.227554 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"service-ca-bundle\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.245609 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.309092 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-rbac-proxy\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.342662 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"audit-1\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.409984 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-ovn-kubernetes\"/\"env-overrides\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.414748 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"serving-cert\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.439021 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-service-ca.crt\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.504739 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"audit-1\"" Jan 23 08:57:10 crc kubenswrapper[5117]: I0123 08:57:10.504898 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"iptables-alerter-script\"" Jan 23 08:57:11 crc kubenswrapper[5117]: I0123 08:57:11.083610 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:11 crc kubenswrapper[5117]: I0123 08:57:11.444380 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-dockercfg-bjqfd\"" Jan 23 08:57:11 crc kubenswrapper[5117]: I0123 08:57:11.499392 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:57:11 crc kubenswrapper[5117]: I0123 08:57:11.670367 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-config-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:11 crc kubenswrapper[5117]: I0123 08:57:11.950383 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-dockercfg-6n5ln\"" Jan 23 08:57:12 crc kubenswrapper[5117]: I0123 08:57:12.067611 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"service-ca-operator-config\"" Jan 23 08:57:12 crc kubenswrapper[5117]: I0123 08:57:12.258678 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"serving-cert\"" Jan 23 08:57:12 crc kubenswrapper[5117]: I0123 08:57:12.578171 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"openshift-service-ca.crt\"" Jan 23 08:57:12 crc kubenswrapper[5117]: I0123 08:57:12.666922 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"trusted-ca-bundle\"" Jan 23 08:57:12 crc kubenswrapper[5117]: I0123 08:57:12.705436 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serviceaccount-dockercfg-4gqzj\"" Jan 23 08:57:12 crc kubenswrapper[5117]: I0123 08:57:12.721992 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.035012 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"signing-cabundle\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.138688 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"olm-operator-serving-cert\"" Jan 23 08:57:13 crc 
kubenswrapper[5117]: I0123 08:57:13.231687 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"multus-daemon-config\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.233744 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-canary\"/\"openshift-service-ca.crt\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.298786 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.447171 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"openshift-service-ca.crt\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.559934 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"canary-serving-cert\"" Jan 23 08:57:13 crc kubenswrapper[5117]: I0123 08:57:13.995156 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"openshift-service-ca.crt\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.007083 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-serving-cert\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.037243 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"hostpath-provisioner\"/\"csi-hostpath-provisioner-sa-dockercfg-7dcws\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.163289 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-tls\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.247881 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-service-ca-bundle\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.611662 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-serving-cert\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.671721 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"installation-pull-secrets\"" Jan 23 08:57:14 crc kubenswrapper[5117]: I0123 08:57:14.770120 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"cluster-version-operator-serving-cert\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.011001 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.014455 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.063598 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.063680 5117 prober.go:120] "Probe failed" 
probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.221324 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver-operator\"/\"openshift-apiserver-operator-dockercfg-6c46w\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.259221 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"image-registry-operator-tls\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.265160 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"serving-cert\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.271070 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-dockercfg-kpvmz\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.319708 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-daemon-dockercfg-w9nzh\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.369022 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.373828 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-server-dockercfg-dzw6b\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.456539 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"marketplace-operator-metrics\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.464875 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"kube-root-ca.crt\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.465043 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-dockercfg-8dkm8\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.579161 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.626614 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-serving-cert\"" Jan 23 08:57:15 crc kubenswrapper[5117]: I0123 08:57:15.921408 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"kube-root-ca.crt\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.087204 5117 reflector.go:430] "Caches populated" type="*v1.RuntimeClass" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.254961 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.256079 5117 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"openshift-machine-config-operator\"/\"machine-config-controller-dockercfg-xnj77\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.266290 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-dockercfg-sw6nc\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.301361 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"service-ca-dockercfg-bgxvm\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.362890 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.412040 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-operator\"/\"metrics-tls\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.481392 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-client\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.506434 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"etcd-client\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.578065 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager\"/\"openshift-controller-manager-sa-dockercfg-djmfg\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.675001 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"kube-root-ca.crt\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.688540 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"packageserver-service-cert\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.841741 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"default-dockercfg-g6kgg\"" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.912107 5117 reflector.go:430] "Caches populated" type="*v1.Node" reflector="k8s.io/client-go/informers/factory.go:160" Jan 23 08:57:16 crc kubenswrapper[5117]: I0123 08:57:16.978405 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"machine-config-operator-images\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.066127 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.129435 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-certs-default\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.178902 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"client-ca\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.197842 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"kube-root-ca.crt\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.221507 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"serving-cert\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.374647 
5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-api\"/\"control-plane-machine-set-operator-tls\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.470856 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"etcd-serving-ca\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.754005 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-rbac-proxy\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.816618 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"trusted-ca-bundle\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.839987 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"service-ca\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.853776 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-dns\"/\"openshift-service-ca.crt\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.970774 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"trusted-ca-bundle\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.976663 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-config\"" Jan 23 08:57:17 crc kubenswrapper[5117]: I0123 08:57:17.998775 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.022401 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-node-dockercfg-l2v2m\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.081429 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"registry-dockercfg-6w67b\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.085375 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-tls\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.477322 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"openshift-service-ca.crt\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.503539 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ancillary-tools-dockercfg-nwglk\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.594086 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"kube-root-ca.crt\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.599806 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.651700 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"machine-api-operator-images\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.696249 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-diagnostics\"/\"kube-root-ca.crt\"" Jan 23 08:57:18 crc 
kubenswrapper[5117]: I0123 08:57:18.807688 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"serving-cert\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.942702 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-samples-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:18 crc kubenswrapper[5117]: I0123 08:57:18.982877 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.010921 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"cluster-samples-operator-dockercfg-jmhxf\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.027829 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-dockercfg-2wbn2\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.062806 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"kube-root-ca.crt\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.275737 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.321578 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-ca-bundle\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.360846 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca\"/\"openshift-service-ca.crt\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.368975 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"package-server-manager-serving-cert\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.437157 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-console\"/\"networking-console-plugin-cert\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.453866 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-dockercfg-jcmfj\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.542195 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"default-cni-sysctl-allowlist\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.738622 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-network-node-identity\"/\"network-node-identity-cert\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.922500 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-config\"" Jan 23 08:57:19 crc kubenswrapper[5117]: I0123 08:57:19.939418 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-console\"/\"networking-console-plugin\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.187251 5117 reflector.go:430] "Caches populated" logger="kubernetes.io/kubelet-serving" type="*v1.CertificateSigningRequest" 
reflector="k8s.io/client-go/tools/watch/informerwatcher.go:162" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.232584 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"metrics-daemon-secret\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.282623 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-dockercfg-tnfx9\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.454879 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-network-node-identity\"/\"openshift-service-ca.crt\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.524233 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"console-operator-dockercfg-kl6m8\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.605904 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.649112 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"etcd-client\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.680780 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.697946 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"openshift-service-ca.crt\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.757761 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-multus\"/\"cni-copy-resources\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.783993 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-api\"/\"kube-root-ca.crt\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.810758 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"pprof-cert\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.837041 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"openshift-service-ca.crt\"" Jan 23 08:57:20 crc kubenswrapper[5117]: I0123 08:57:20.969640 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"service-ca-bundle\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.048285 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.062685 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"openshift-service-ca.crt\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.100420 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-ac-dockercfg-gj7jx\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.139644 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-config-operator\"/\"config-operator-serving-cert\"" Jan 23 08:57:21 crc 
kubenswrapper[5117]: I0123 08:57:21.191824 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"default-dockercfg-mdwwj\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.320713 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-multus\"/\"multus-admission-controller-secret\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.400490 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-kubernetes-control-plane-dockercfg-nl8tp\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.415851 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console\"/\"console-serving-cert\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.464512 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-oauth-apiserver\"/\"openshift-service-ca.crt\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.580091 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-control-plane-metrics-cert\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.609801 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-config\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.646403 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.667685 5117 ???:1] "http: TLS handshake error from 192.168.126.11:55580: no serving certificate available for the kubelet" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.714215 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication-operator\"/\"authentication-operator-dockercfg-6tbpn\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.868158 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-route-controller-manager\"/\"config\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.894397 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ovn-kubernetes\"/\"ovn-node-metrics-cert\"" Jan 23 08:57:21 crc kubenswrapper[5117]: I0123 08:57:21.961019 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-etcd-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.143077 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-console-operator\"/\"serving-cert\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.162754 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-image-registry\"/\"cluster-image-registry-operator-dockercfg-ntnd7\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.293444 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-samples-operator\"/\"samples-operator-tls\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.377640 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-config\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.385848 5117 reflector.go:430] 
"Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"config\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.425916 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-dockercfg-kw8fx\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.586380 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-canary\"/\"default-dockercfg-9pgs7\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.687377 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-storage-version-migrator-operator\"/\"kube-storage-version-migrator-operator-dockercfg-2h6bs\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.815290 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-config\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.844677 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"kube-root-ca.crt\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.911536 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"metrics-tls\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.948010 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"ovnkube-script-lib\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.948995 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"openshift-service-ca.crt\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.963277 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-kube-controller-manager-operator\"/\"kube-controller-manager-operator-serving-cert\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.972624 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"openshift-service-ca.crt\"" Jan 23 08:57:22 crc kubenswrapper[5117]: I0123 08:57:22.992952 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-oauth-apiserver\"/\"encryption-config-1\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.055185 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-route-controller-manager\"/\"route-controller-manager-sa-dockercfg-mmcpt\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.146218 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress\"/\"router-metrics-certs-default\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.212719 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-ingress-operator\"/\"ingress-operator-dockercfg-74nwh\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.345848 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-machine-approver\"/\"machine-approver-sa-dockercfg-wzhvk\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.415684 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"openshift-service-ca.crt\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.450473 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" 
reflector="object-\"openshift-operator-lifecycle-manager\"/\"kube-root-ca.crt\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.474875 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"trusted-ca\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.607219 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"mco-proxy-tls\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.640272 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"console-config\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.843109 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-apiserver\"/\"encryption-config-1\"" Jan 23 08:57:23 crc kubenswrapper[5117]: I0123 08:57:23.997066 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-service-ca-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.036801 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-machine-config-operator\"/\"proxy-tls\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.041705 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager\"/\"openshift-global-ca\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.160254 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-apiserver-operator\"/\"kube-apiserver-operator-config\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.399842 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.549696 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"hostpath-provisioner\"/\"kube-root-ca.crt\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.564321 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-cluster-version\"/\"default-dockercfg-hqpm5\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.655490 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns-operator\"/\"dns-operator-dockercfg-wbbsn\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.782087 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-controller-manager-operator-config\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.787930 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-machine-config-operator\"/\"kube-root-ca.crt\"" Jan 23 08:57:24 crc kubenswrapper[5117]: I0123 08:57:24.795170 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"kube-root-ca.crt\"" Jan 23 08:57:25 crc kubenswrapper[5117]: I0123 08:57:25.370079 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ingress\"/\"kube-root-ca.crt\"" Jan 23 08:57:25 crc kubenswrapper[5117]: I0123 08:57:25.558053 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-marketplace\"/\"marketplace-trusted-ca\"" Jan 23 08:57:25 crc kubenswrapper[5117]: I0123 08:57:25.563242 5117 
reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-image-registry\"/\"image-registry-certificates\"" Jan 23 08:57:25 crc kubenswrapper[5117]: I0123 08:57:25.703828 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:25 crc kubenswrapper[5117]: I0123 08:57:25.710700 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-controller-manager-operator\"/\"openshift-service-ca.crt\"" Jan 23 08:57:25 crc kubenswrapper[5117]: I0123 08:57:25.746470 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-ovn-kubernetes\"/\"kube-root-ca.crt\"" Jan 23 08:57:26 crc kubenswrapper[5117]: I0123 08:57:26.175629 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-service-ca\"/\"signing-key\"" Jan 23 08:57:26 crc kubenswrapper[5117]: I0123 08:57:26.611724 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-etcd-operator\"/\"etcd-operator-dockercfg-4vdnc\"" Jan 23 08:57:26 crc kubenswrapper[5117]: I0123 08:57:26.967746 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-kube-scheduler-operator\"/\"openshift-kube-scheduler-operator-config\"" Jan 23 08:57:27 crc kubenswrapper[5117]: I0123 08:57:27.440574 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-dns\"/\"dns-default-metrics-tls\"" Jan 23 08:57:27 crc kubenswrapper[5117]: I0123 08:57:27.539316 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-apiserver\"/\"openshift-service-ca.crt\"" Jan 23 08:57:27 crc kubenswrapper[5117]: I0123 08:57:27.544731 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-console\"/\"openshift-service-ca.crt\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.112309 5117 reflector.go:430] "Caches populated" type="*v1.Pod" reflector="pkg/kubelet/config/apiserver.go:66" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.116559 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-66458b6674-5dggj"] Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.116617 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-kube-apiserver/kube-apiserver-crc","openshift-authentication/oauth-openshift-57644cf6b9-bhf2k"] Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117080 5117 kubelet.go:3323] "Trying to delete pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117109 5117 mirror_client.go:130] "Deleting a mirror pod" pod="openshift-kube-apiserver/kube-apiserver-crc" podUID="c906ceb9-6e2c-479a-9f89-742dda69a66c" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117199 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" containerName="oauth-openshift" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117217 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" containerName="oauth-openshift" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117243 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" containerName="installer" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117252 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" containerName="installer" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117365 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="4aff08e0-04ca-4004-8505-2a786aec8e92" containerName="installer" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.117381 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" containerName="oauth-openshift" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.149566 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-kube-apiserver/kube-apiserver-crc" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.149611 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.152279 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-error\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.152583 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"openshift-service-ca.crt\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.153497 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-serving-cert\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.154407 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-session\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.154425 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-router-certs\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.154454 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-idp-0-file-data\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.154462 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-service-ca\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.154992 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"oauth-openshift-dockercfg-d2bf2\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.155046 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-login\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.155100 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-user-template-provider-selection\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.155451 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"audit\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.156478 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-cliconfig\"" Jan 23 08:57:28 crc 
kubenswrapper[5117]: I0123 08:57:28.156849 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"kube-root-ca.crt\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.163795 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-trusted-ca-bundle\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.169244 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-authentication\"/\"v4-0-config-system-ocp-branding-template\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190487 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190557 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190594 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190623 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-error\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190638 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pxh2v\" (UniqueName: \"kubernetes.io/projected/8d41a273-2442-4694-a146-688cf4cefe0e-kube-api-access-pxh2v\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190861 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190880 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d41a273-2442-4694-a146-688cf4cefe0e-audit-dir\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190905 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190927 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190947 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190976 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-audit-policies\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.190997 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-login\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.191014 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-session\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.191049 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.192118 5117 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-kube-apiserver/kube-apiserver-crc" podStartSLOduration=30.192098201 podStartE2EDuration="30.192098201s" podCreationTimestamp="2026-01-23 08:56:58 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:57:28.189323633 +0000 UTC m=+259.945448679" watchObservedRunningTime="2026-01-23 08:57:28.192098201 +0000 UTC m=+259.948223227" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.234357 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-cluster-version\"/\"kube-root-ca.crt\"" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.292664 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-cliconfig\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.292726 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.292801 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-error\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.292834 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pxh2v\" (UniqueName: \"kubernetes.io/projected/8d41a273-2442-4694-a146-688cf4cefe0e-kube-api-access-pxh2v\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293011 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293051 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d41a273-2442-4694-a146-688cf4cefe0e-audit-dir\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293085 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: 
\"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293121 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293198 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-idp-0-file-data\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293229 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-audit-policies\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293269 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-login\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293299 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-session\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293347 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293395 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293623 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-cliconfig\" (UniqueName: 
\"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-cliconfig\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.293871 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-dir\" (UniqueName: \"kubernetes.io/host-path/8d41a273-2442-4694-a146-688cf4cefe0e-audit-dir\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.294690 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-service-ca\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-service-ca\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.295070 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-trusted-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-trusted-ca-bundle\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.295514 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"audit-policies\" (UniqueName: \"kubernetes.io/configmap/8d41a273-2442-4694-a146-688cf4cefe0e-audit-policies\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.299177 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-provider-selection\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-provider-selection\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.299180 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-login\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-login\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.299526 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-router-certs\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-router-certs\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.300252 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-idp-0-file-data\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-idp-0-file-data\") pod 
\"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.300272 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-ocp-branding-template\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-ocp-branding-template\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.300400 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-user-template-error\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-user-template-error\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.301467 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-serving-cert\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-serving-cert\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.302427 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"v4-0-config-system-session\" (UniqueName: \"kubernetes.io/secret/8d41a273-2442-4694-a146-688cf4cefe0e-v4-0-config-system-session\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.310466 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pxh2v\" (UniqueName: \"kubernetes.io/projected/8d41a273-2442-4694-a146-688cf4cefe0e-kube-api-access-pxh2v\") pod \"oauth-openshift-57644cf6b9-bhf2k\" (UID: \"8d41a273-2442-4694-a146-688cf4cefe0e\") " pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.475482 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.670637 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-authentication/oauth-openshift-57644cf6b9-bhf2k"] Jan 23 08:57:28 crc kubenswrapper[5117]: I0123 08:57:28.795740 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f399bc34-e976-4cd2-90df-40a3d08fb983" path="/var/lib/kubelet/pods/f399bc34-e976-4cd2-90df-40a3d08fb983/volumes" Jan 23 08:57:29 crc kubenswrapper[5117]: I0123 08:57:29.330610 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" event={"ID":"8d41a273-2442-4694-a146-688cf4cefe0e","Type":"ContainerStarted","Data":"88e9c08ac794a2d4c16255fa257663211cdaf13f7be3d65d86bf36c527e4d5b5"} Jan 23 08:57:29 crc kubenswrapper[5117]: I0123 08:57:29.330661 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" event={"ID":"8d41a273-2442-4694-a146-688cf4cefe0e","Type":"ContainerStarted","Data":"71f3a0733630612304033c15626d5c1d5892aae2571ba5c973d4c36726694ea2"} Jan 23 08:57:29 crc kubenswrapper[5117]: I0123 08:57:29.330841 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:29 crc kubenswrapper[5117]: I0123 08:57:29.626262 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" Jan 23 08:57:29 crc kubenswrapper[5117]: I0123 08:57:29.646178 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-authentication/oauth-openshift-57644cf6b9-bhf2k" podStartSLOduration=69.646160792 podStartE2EDuration="1m9.646160792s" podCreationTimestamp="2026-01-23 08:56:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:57:29.356696158 +0000 UTC m=+261.112821184" watchObservedRunningTime="2026-01-23 08:57:29.646160792 +0000 UTC m=+261.402285818" Jan 23 08:57:31 crc kubenswrapper[5117]: I0123 08:57:31.976860 5117 kubelet.go:2547] "SyncLoop REMOVE" source="file" pods=["openshift-kube-apiserver/kube-apiserver-startup-monitor-crc"] Jan 23 08:57:31 crc kubenswrapper[5117]: I0123 08:57:31.977559 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" containerID="cri-o://886268cc908fe0c4b7e2c235665fe12d7f7600d63542a2c6b552dee61e56aeec" gracePeriod=5 Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.370752 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.371660 5117 generic.go:358] "Generic (PLEG): container finished" podID="f7dbc7e1ee9c187a863ef9b473fad27b" containerID="886268cc908fe0c4b7e2c235665fe12d7f7600d63542a2c6b552dee61e56aeec" exitCode=137 Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.544195 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.544266 
5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.545787 5117 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625677 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625732 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625763 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625777 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625803 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") pod \"f7dbc7e1ee9c187a863ef9b473fad27b\" (UID: \"f7dbc7e1ee9c187a863ef9b473fad27b\") " Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625812 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests" (OuterVolumeSpecName: "manifests") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "manifests". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625872 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log" (OuterVolumeSpecName: "var-log") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625912 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir" (OuterVolumeSpecName: "resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "resource-dir". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.625934 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock" (OuterVolumeSpecName: "var-lock") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "var-lock". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.626213 5117 reconciler_common.go:299] "Volume detached for volume \"manifests\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-manifests\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.626227 5117 reconciler_common.go:299] "Volume detached for volume \"var-log\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-log\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.626236 5117 reconciler_common.go:299] "Volume detached for volume \"var-lock\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-var-lock\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.626244 5117 reconciler_common.go:299] "Volume detached for volume \"resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.638298 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir" (OuterVolumeSpecName: "pod-resource-dir") pod "f7dbc7e1ee9c187a863ef9b473fad27b" (UID: "f7dbc7e1ee9c187a863ef9b473fad27b"). InnerVolumeSpecName "pod-resource-dir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 08:57:37 crc kubenswrapper[5117]: I0123 08:57:37.727009 5117 reconciler_common.go:299] "Volume detached for volume \"pod-resource-dir\" (UniqueName: \"kubernetes.io/host-path/f7dbc7e1ee9c187a863ef9b473fad27b-pod-resource-dir\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:38 crc kubenswrapper[5117]: I0123 08:57:38.379287 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-apiserver_kube-apiserver-startup-monitor-crc_f7dbc7e1ee9c187a863ef9b473fad27b/startup-monitor/0.log" Jan 23 08:57:38 crc kubenswrapper[5117]: I0123 08:57:38.379683 5117 scope.go:117] "RemoveContainer" containerID="886268cc908fe0c4b7e2c235665fe12d7f7600d63542a2c6b552dee61e56aeec" Jan 23 08:57:38 crc kubenswrapper[5117]: I0123 08:57:38.379705 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" Jan 23 08:57:38 crc kubenswrapper[5117]: I0123 08:57:38.394952 5117 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 23 08:57:38 crc kubenswrapper[5117]: I0123 08:57:38.775787 5117 status_manager.go:895] "Failed to get status for pod" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" pod="openshift-kube-apiserver/kube-apiserver-startup-monitor-crc" err="pods \"kube-apiserver-startup-monitor-crc\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-kube-apiserver\": no relationship found between node 'crc' and this object" Jan 23 08:57:38 crc kubenswrapper[5117]: I0123 08:57:38.778488 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" path="/var/lib/kubelet/pods/f7dbc7e1ee9c187a863ef9b473fad27b/volumes" Jan 23 08:57:41 crc kubenswrapper[5117]: I0123 08:57:41.419108 5117 generic.go:358] "Generic (PLEG): container finished" podID="c818e404-5db9-4c23-893d-1cd602c404aa" containerID="a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9" exitCode=0 Jan 23 08:57:41 crc kubenswrapper[5117]: I0123 08:57:41.419199 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" event={"ID":"c818e404-5db9-4c23-893d-1cd602c404aa","Type":"ContainerDied","Data":"a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9"} Jan 23 08:57:41 crc kubenswrapper[5117]: I0123 08:57:41.420084 5117 scope.go:117] "RemoveContainer" containerID="a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9" Jan 23 08:57:42 crc kubenswrapper[5117]: I0123 08:57:42.427326 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" event={"ID":"c818e404-5db9-4c23-893d-1cd602c404aa","Type":"ContainerStarted","Data":"6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a"} Jan 23 08:57:42 crc kubenswrapper[5117]: I0123 08:57:42.428122 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:57:42 crc kubenswrapper[5117]: I0123 08:57:42.431384 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.063107 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.063174 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 08:57:45 crc 
kubenswrapper[5117]: I0123 08:57:45.063213 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.063586 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"c1ef916fc9ddc28b4cf6d39c794f56acc3529533f1acf6dc7245d743afdd645b"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.063648 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://c1ef916fc9ddc28b4cf6d39c794f56acc3529533f1acf6dc7245d743afdd645b" gracePeriod=600 Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.447246 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="c1ef916fc9ddc28b4cf6d39c794f56acc3529533f1acf6dc7245d743afdd645b" exitCode=0 Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.447346 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"c1ef916fc9ddc28b4cf6d39c794f56acc3529533f1acf6dc7245d743afdd645b"} Jan 23 08:57:45 crc kubenswrapper[5117]: I0123 08:57:45.447406 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"8a985eb1e9e04c52887c77708e8b5b6b0c0bbd9b01592af6ad6940a52462f607"} Jan 23 08:57:47 crc kubenswrapper[5117]: I0123 08:57:47.498839 5117 ???:1] "http: TLS handshake error from 192.168.126.11:57464: no serving certificate available for the kubelet" Jan 23 08:57:58 crc kubenswrapper[5117]: I0123 08:57:58.965480 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6cjff"] Jan 23 08:57:58 crc kubenswrapper[5117]: I0123 08:57:58.966397 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" podUID="07f1e5bf-d4d4-4e48-8973-9645b408072b" containerName="controller-manager" containerID="cri-o://9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9" gracePeriod=30 Jan 23 08:57:58 crc kubenswrapper[5117]: I0123 08:57:58.982416 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c"] Jan 23 08:57:58 crc kubenswrapper[5117]: I0123 08:57:58.982975 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" podUID="2da5d032-059b-467c-89ec-6d81958200ca" containerName="route-controller-manager" containerID="cri-o://2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51" gracePeriod=30 Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.366674 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391239 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391772 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391784 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391813 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="07f1e5bf-d4d4-4e48-8973-9645b408072b" containerName="controller-manager" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391821 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="07f1e5bf-d4d4-4e48-8973-9645b408072b" containerName="controller-manager" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391909 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="f7dbc7e1ee9c187a863ef9b473fad27b" containerName="startup-monitor" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.391922 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="07f1e5bf-d4d4-4e48-8973-9645b408072b" containerName="controller-manager" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.397379 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.411529 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.419466 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451499 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-proxy-ca-bundles\") pod \"07f1e5bf-d4d4-4e48-8973-9645b408072b\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451672 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f1e5bf-d4d4-4e48-8973-9645b408072b-serving-cert\") pod \"07f1e5bf-d4d4-4e48-8973-9645b408072b\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451702 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2tphm\" (UniqueName: \"kubernetes.io/projected/07f1e5bf-d4d4-4e48-8973-9645b408072b-kube-api-access-2tphm\") pod \"07f1e5bf-d4d4-4e48-8973-9645b408072b\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451776 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-config\") pod \"07f1e5bf-d4d4-4e48-8973-9645b408072b\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451815 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07f1e5bf-d4d4-4e48-8973-9645b408072b-tmp\") pod \"07f1e5bf-d4d4-4e48-8973-9645b408072b\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451835 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-client-ca\") pod \"07f1e5bf-d4d4-4e48-8973-9645b408072b\" (UID: \"07f1e5bf-d4d4-4e48-8973-9645b408072b\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.451967 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-proxy-ca-bundles\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.452019 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-config\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.452037 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47945962-202f-4b58-9d42-a947a0d6c52a-serving-cert\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 
08:57:59.452069 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-client-ca\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.452113 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r94nh\" (UniqueName: \"kubernetes.io/projected/47945962-202f-4b58-9d42-a947a0d6c52a-kube-api-access-r94nh\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.452170 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/47945962-202f-4b58-9d42-a947a0d6c52a-tmp\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.453365 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-proxy-ca-bundles" (OuterVolumeSpecName: "proxy-ca-bundles") pod "07f1e5bf-d4d4-4e48-8973-9645b408072b" (UID: "07f1e5bf-d4d4-4e48-8973-9645b408072b"). InnerVolumeSpecName "proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.454258 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/07f1e5bf-d4d4-4e48-8973-9645b408072b-tmp" (OuterVolumeSpecName: "tmp") pod "07f1e5bf-d4d4-4e48-8973-9645b408072b" (UID: "07f1e5bf-d4d4-4e48-8973-9645b408072b"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.454405 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-client-ca" (OuterVolumeSpecName: "client-ca") pod "07f1e5bf-d4d4-4e48-8973-9645b408072b" (UID: "07f1e5bf-d4d4-4e48-8973-9645b408072b"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.454483 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-config" (OuterVolumeSpecName: "config") pod "07f1e5bf-d4d4-4e48-8973-9645b408072b" (UID: "07f1e5bf-d4d4-4e48-8973-9645b408072b"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.464708 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/07f1e5bf-d4d4-4e48-8973-9645b408072b-kube-api-access-2tphm" (OuterVolumeSpecName: "kube-api-access-2tphm") pod "07f1e5bf-d4d4-4e48-8973-9645b408072b" (UID: "07f1e5bf-d4d4-4e48-8973-9645b408072b"). InnerVolumeSpecName "kube-api-access-2tphm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.464901 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/07f1e5bf-d4d4-4e48-8973-9645b408072b-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "07f1e5bf-d4d4-4e48-8973-9645b408072b" (UID: "07f1e5bf-d4d4-4e48-8973-9645b408072b"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.473304 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.473979 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2da5d032-059b-467c-89ec-6d81958200ca" containerName="route-controller-manager" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.473996 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="2da5d032-059b-467c-89ec-6d81958200ca" containerName="route-controller-manager" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.474097 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="2da5d032-059b-467c-89ec-6d81958200ca" containerName="route-controller-manager" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.481823 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.481950 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.524880 5117 generic.go:358] "Generic (PLEG): container finished" podID="07f1e5bf-d4d4-4e48-8973-9645b408072b" containerID="9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9" exitCode=0 Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.524980 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" event={"ID":"07f1e5bf-d4d4-4e48-8973-9645b408072b","Type":"ContainerDied","Data":"9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9"} Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.525054 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" event={"ID":"07f1e5bf-d4d4-4e48-8973-9645b408072b","Type":"ContainerDied","Data":"074a205a9ecdef0defc2ecb39f771749c8db5debaa8e4923c16635f26d68bb1b"} Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.525077 5117 scope.go:117] "RemoveContainer" containerID="9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.525427 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-controller-manager/controller-manager-65b6cccf98-6cjff" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.526394 5117 generic.go:358] "Generic (PLEG): container finished" podID="2da5d032-059b-467c-89ec-6d81958200ca" containerID="2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51" exitCode=0 Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.526499 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" event={"ID":"2da5d032-059b-467c-89ec-6d81958200ca","Type":"ContainerDied","Data":"2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51"} Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.526520 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" event={"ID":"2da5d032-059b-467c-89ec-6d81958200ca","Type":"ContainerDied","Data":"a31088b50a8ecac73b8dcb03c630bc60f4a3be6a818fb092c8c023f300b33162"} Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.526615 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.540711 5117 scope.go:117] "RemoveContainer" containerID="9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9" Jan 23 08:57:59 crc kubenswrapper[5117]: E0123 08:57:59.541073 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9\": container with ID starting with 9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9 not found: ID does not exist" containerID="9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.541101 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9"} err="failed to get container status \"9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9\": rpc error: code = NotFound desc = could not find container \"9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9\": container with ID starting with 9d6d9ba901afa64b2763fd4d89e185c07caa5f6dea0dedb08f246e92e8931ac9 not found: ID does not exist" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.541119 5117 scope.go:117] "RemoveContainer" containerID="2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.552523 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6cjff"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.552834 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2da5d032-059b-467c-89ec-6d81958200ca-tmp\") pod \"2da5d032-059b-467c-89ec-6d81958200ca\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.552891 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-config\") pod \"2da5d032-059b-467c-89ec-6d81958200ca\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") 
" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.552909 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-client-ca\") pod \"2da5d032-059b-467c-89ec-6d81958200ca\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.552971 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2da5d032-059b-467c-89ec-6d81958200ca-serving-cert\") pod \"2da5d032-059b-467c-89ec-6d81958200ca\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553014 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pvwmj\" (UniqueName: \"kubernetes.io/projected/2da5d032-059b-467c-89ec-6d81958200ca-kube-api-access-pvwmj\") pod \"2da5d032-059b-467c-89ec-6d81958200ca\" (UID: \"2da5d032-059b-467c-89ec-6d81958200ca\") " Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553192 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-client-ca\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553245 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r94nh\" (UniqueName: \"kubernetes.io/projected/47945962-202f-4b58-9d42-a947a0d6c52a-kube-api-access-r94nh\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553276 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a0e150-2fac-4995-8080-9b2061039b80-serving-cert\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553305 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-config\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553325 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kfnwn\" (UniqueName: \"kubernetes.io/projected/37a0e150-2fac-4995-8080-9b2061039b80-kube-api-access-kfnwn\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553348 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/47945962-202f-4b58-9d42-a947a0d6c52a-tmp\") pod 
\"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553367 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a0e150-2fac-4995-8080-9b2061039b80-tmp\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553431 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-proxy-ca-bundles\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553466 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-config\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553488 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47945962-202f-4b58-9d42-a947a0d6c52a-serving-cert\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553522 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-client-ca\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553572 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553584 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/07f1e5bf-d4d4-4e48-8973-9645b408072b-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553593 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553602 5117 reconciler_common.go:299] "Volume detached for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/07f1e5bf-d4d4-4e48-8973-9645b408072b-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553613 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/07f1e5bf-d4d4-4e48-8973-9645b408072b-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 
08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.553623 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2tphm\" (UniqueName: \"kubernetes.io/projected/07f1e5bf-d4d4-4e48-8973-9645b408072b-kube-api-access-2tphm\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.554442 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-client-ca\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.554693 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2da5d032-059b-467c-89ec-6d81958200ca-tmp" (OuterVolumeSpecName: "tmp") pod "2da5d032-059b-467c-89ec-6d81958200ca" (UID: "2da5d032-059b-467c-89ec-6d81958200ca"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.554849 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/47945962-202f-4b58-9d42-a947a0d6c52a-tmp\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.555481 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-client-ca" (OuterVolumeSpecName: "client-ca") pod "2da5d032-059b-467c-89ec-6d81958200ca" (UID: "2da5d032-059b-467c-89ec-6d81958200ca"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.555528 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-config" (OuterVolumeSpecName: "config") pod "2da5d032-059b-467c-89ec-6d81958200ca" (UID: "2da5d032-059b-467c-89ec-6d81958200ca"). InnerVolumeSpecName "config". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.555805 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-config\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.555851 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/47945962-202f-4b58-9d42-a947a0d6c52a-proxy-ca-bundles\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.557594 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-controller-manager/controller-manager-65b6cccf98-6cjff"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.557989 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/2da5d032-059b-467c-89ec-6d81958200ca-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "2da5d032-059b-467c-89ec-6d81958200ca" (UID: "2da5d032-059b-467c-89ec-6d81958200ca"). InnerVolumeSpecName "serving-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.558754 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/47945962-202f-4b58-9d42-a947a0d6c52a-serving-cert\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.558990 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2da5d032-059b-467c-89ec-6d81958200ca-kube-api-access-pvwmj" (OuterVolumeSpecName: "kube-api-access-pvwmj") pod "2da5d032-059b-467c-89ec-6d81958200ca" (UID: "2da5d032-059b-467c-89ec-6d81958200ca"). InnerVolumeSpecName "kube-api-access-pvwmj". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.560029 5117 scope.go:117] "RemoveContainer" containerID="2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51" Jan 23 08:57:59 crc kubenswrapper[5117]: E0123 08:57:59.560401 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51\": container with ID starting with 2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51 not found: ID does not exist" containerID="2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.560424 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51"} err="failed to get container status \"2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51\": rpc error: code = NotFound desc = could not find container \"2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51\": container with ID starting with 2486b521cb2058da5850642e884c1f7c5e8214766a90a12ecbaf9f37d5659c51 not found: ID does not exist" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.570396 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r94nh\" (UniqueName: \"kubernetes.io/projected/47945962-202f-4b58-9d42-a947a0d6c52a-kube-api-access-r94nh\") pod \"controller-manager-77d68b5fb4-d9jxh\" (UID: \"47945962-202f-4b58-9d42-a947a0d6c52a\") " pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655103 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-config\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655188 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kfnwn\" (UniqueName: \"kubernetes.io/projected/37a0e150-2fac-4995-8080-9b2061039b80-kube-api-access-kfnwn\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655211 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a0e150-2fac-4995-8080-9b2061039b80-tmp\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655282 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-client-ca\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655315 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a0e150-2fac-4995-8080-9b2061039b80-serving-cert\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655372 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655385 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/2da5d032-059b-467c-89ec-6d81958200ca-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655393 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/2da5d032-059b-467c-89ec-6d81958200ca-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655402 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pvwmj\" (UniqueName: \"kubernetes.io/projected/2da5d032-059b-467c-89ec-6d81958200ca-kube-api-access-pvwmj\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655411 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/2da5d032-059b-467c-89ec-6d81958200ca-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.655708 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a0e150-2fac-4995-8080-9b2061039b80-tmp\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.656164 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-client-ca\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.656698 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-config\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.660297 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a0e150-2fac-4995-8080-9b2061039b80-serving-cert\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.671987 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kfnwn\" (UniqueName: 
\"kubernetes.io/projected/37a0e150-2fac-4995-8080-9b2061039b80-kube-api-access-kfnwn\") pod \"route-controller-manager-6b596cb98b-fbvll\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.728965 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.794529 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.911948 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.919651 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-776cdc94d6-brg6c"] Jan 23 08:57:59 crc kubenswrapper[5117]: I0123 08:57:59.958860 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh"] Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.025373 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll"] Jan 23 08:58:00 crc kubenswrapper[5117]: W0123 08:58:00.037920 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod37a0e150_2fac_4995_8080_9b2061039b80.slice/crio-52f6c0b92afbf07f69271bf8f2f817a2022eb3237e2d75a666802b93ec1933a1 WatchSource:0}: Error finding container 52f6c0b92afbf07f69271bf8f2f817a2022eb3237e2d75a666802b93ec1933a1: Status 404 returned error can't find the container with id 52f6c0b92afbf07f69271bf8f2f817a2022eb3237e2d75a666802b93ec1933a1 Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.535002 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" event={"ID":"37a0e150-2fac-4995-8080-9b2061039b80","Type":"ContainerStarted","Data":"62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2"} Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.535449 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" event={"ID":"37a0e150-2fac-4995-8080-9b2061039b80","Type":"ContainerStarted","Data":"52f6c0b92afbf07f69271bf8f2f817a2022eb3237e2d75a666802b93ec1933a1"} Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.535496 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.539657 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" event={"ID":"47945962-202f-4b58-9d42-a947a0d6c52a","Type":"ContainerStarted","Data":"3f6b5a909532e16aacca4a703459000cbd9c601719b5a0e19d908e1b116124d0"} Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.539695 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" 
event={"ID":"47945962-202f-4b58-9d42-a947a0d6c52a","Type":"ContainerStarted","Data":"d3c312798e7ad456a1983cee180e060f1f30ff3cd6f27881d7de6c973c9d147a"} Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.540758 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.559255 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" podStartSLOduration=1.559231544 podStartE2EDuration="1.559231544s" podCreationTimestamp="2026-01-23 08:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:58:00.555438047 +0000 UTC m=+292.311563083" watchObservedRunningTime="2026-01-23 08:58:00.559231544 +0000 UTC m=+292.315356570" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.568600 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.598315 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-controller-manager/controller-manager-77d68b5fb4-d9jxh" podStartSLOduration=1.59829577 podStartE2EDuration="1.59829577s" podCreationTimestamp="2026-01-23 08:57:59 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:58:00.580579923 +0000 UTC m=+292.336704949" watchObservedRunningTime="2026-01-23 08:58:00.59829577 +0000 UTC m=+292.354420796" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.668003 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.778263 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="07f1e5bf-d4d4-4e48-8973-9645b408072b" path="/var/lib/kubelet/pods/07f1e5bf-d4d4-4e48-8973-9645b408072b/volumes" Jan 23 08:58:00 crc kubenswrapper[5117]: I0123 08:58:00.778978 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2da5d032-059b-467c-89ec-6d81958200ca" path="/var/lib/kubelet/pods/2da5d032-059b-467c-89ec-6d81958200ca/volumes" Jan 23 08:58:09 crc kubenswrapper[5117]: I0123 08:58:09.014878 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 08:58:09 crc kubenswrapper[5117]: I0123 08:58:09.018752 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.010799 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll"] Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.011595 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" podUID="37a0e150-2fac-4995-8080-9b2061039b80" containerName="route-controller-manager" 
containerID="cri-o://62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2" gracePeriod=30 Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.408637 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.439349 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f"] Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.443817 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="37a0e150-2fac-4995-8080-9b2061039b80" containerName="route-controller-manager" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.443865 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="37a0e150-2fac-4995-8080-9b2061039b80" containerName="route-controller-manager" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.444102 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="37a0e150-2fac-4995-8080-9b2061039b80" containerName="route-controller-manager" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.452050 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f"] Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.452224 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.476323 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kfnwn\" (UniqueName: \"kubernetes.io/projected/37a0e150-2fac-4995-8080-9b2061039b80-kube-api-access-kfnwn\") pod \"37a0e150-2fac-4995-8080-9b2061039b80\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.476424 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-config\") pod \"37a0e150-2fac-4995-8080-9b2061039b80\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.476545 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a0e150-2fac-4995-8080-9b2061039b80-serving-cert\") pod \"37a0e150-2fac-4995-8080-9b2061039b80\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.476593 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a0e150-2fac-4995-8080-9b2061039b80-tmp\") pod \"37a0e150-2fac-4995-8080-9b2061039b80\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.476672 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-client-ca\") pod \"37a0e150-2fac-4995-8080-9b2061039b80\" (UID: \"37a0e150-2fac-4995-8080-9b2061039b80\") " Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.477610 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-client-ca" (OuterVolumeSpecName: 
"client-ca") pod "37a0e150-2fac-4995-8080-9b2061039b80" (UID: "37a0e150-2fac-4995-8080-9b2061039b80"). InnerVolumeSpecName "client-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.477625 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-config" (OuterVolumeSpecName: "config") pod "37a0e150-2fac-4995-8080-9b2061039b80" (UID: "37a0e150-2fac-4995-8080-9b2061039b80"). InnerVolumeSpecName "config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.479543 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/37a0e150-2fac-4995-8080-9b2061039b80-tmp" (OuterVolumeSpecName: "tmp") pod "37a0e150-2fac-4995-8080-9b2061039b80" (UID: "37a0e150-2fac-4995-8080-9b2061039b80"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.484362 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/37a0e150-2fac-4995-8080-9b2061039b80-kube-api-access-kfnwn" (OuterVolumeSpecName: "kube-api-access-kfnwn") pod "37a0e150-2fac-4995-8080-9b2061039b80" (UID: "37a0e150-2fac-4995-8080-9b2061039b80"). InnerVolumeSpecName "kube-api-access-kfnwn". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.484915 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/37a0e150-2fac-4995-8080-9b2061039b80-serving-cert" (OuterVolumeSpecName: "serving-cert") pod "37a0e150-2fac-4995-8080-9b2061039b80" (UID: "37a0e150-2fac-4995-8080-9b2061039b80"). InnerVolumeSpecName "serving-cert". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.500807 5117 dynamic_cafile_content.go:123] "Loaded a new CA Bundle and Verifier" name="client-ca-bundle::/etc/kubernetes/kubelet-ca.crt" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578008 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45257c3-4158-4be0-8625-8c2501344ab1-serving-cert\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578061 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zs44w\" (UniqueName: \"kubernetes.io/projected/e45257c3-4158-4be0-8625-8c2501344ab1-kube-api-access-zs44w\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578154 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e45257c3-4158-4be0-8625-8c2501344ab1-tmp\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578227 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45257c3-4158-4be0-8625-8c2501344ab1-config\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578277 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45257c3-4158-4be0-8625-8c2501344ab1-client-ca\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578341 5117 reconciler_common.go:299] "Volume detached for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/37a0e150-2fac-4995-8080-9b2061039b80-serving-cert\") on node \"crc\" DevicePath \"\"" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578361 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/37a0e150-2fac-4995-8080-9b2061039b80-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578373 5117 reconciler_common.go:299] "Volume detached for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-client-ca\") on node \"crc\" DevicePath \"\"" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.578384 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kfnwn\" (UniqueName: \"kubernetes.io/projected/37a0e150-2fac-4995-8080-9b2061039b80-kube-api-access-kfnwn\") on node \"crc\" DevicePath \"\"" Jan 23 08:58:39 
crc kubenswrapper[5117]: I0123 08:58:39.578396 5117 reconciler_common.go:299] "Volume detached for volume \"config\" (UniqueName: \"kubernetes.io/configmap/37a0e150-2fac-4995-8080-9b2061039b80-config\") on node \"crc\" DevicePath \"\"" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.679953 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e45257c3-4158-4be0-8625-8c2501344ab1-tmp\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.680033 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45257c3-4158-4be0-8625-8c2501344ab1-config\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.680102 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45257c3-4158-4be0-8625-8c2501344ab1-client-ca\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.680711 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/e45257c3-4158-4be0-8625-8c2501344ab1-tmp\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.681234 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"client-ca\" (UniqueName: \"kubernetes.io/configmap/e45257c3-4158-4be0-8625-8c2501344ab1-client-ca\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.681362 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45257c3-4158-4be0-8625-8c2501344ab1-serving-cert\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.681445 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/configmap/e45257c3-4158-4be0-8625-8c2501344ab1-config\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.681478 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zs44w\" (UniqueName: \"kubernetes.io/projected/e45257c3-4158-4be0-8625-8c2501344ab1-kube-api-access-zs44w\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " 
pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.688722 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"serving-cert\" (UniqueName: \"kubernetes.io/secret/e45257c3-4158-4be0-8625-8c2501344ab1-serving-cert\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.709523 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zs44w\" (UniqueName: \"kubernetes.io/projected/e45257c3-4158-4be0-8625-8c2501344ab1-kube-api-access-zs44w\") pod \"route-controller-manager-8d4c8b555-l955f\" (UID: \"e45257c3-4158-4be0-8625-8c2501344ab1\") " pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:39 crc kubenswrapper[5117]: I0123 08:58:39.774312 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.034255 5117 generic.go:358] "Generic (PLEG): container finished" podID="37a0e150-2fac-4995-8080-9b2061039b80" containerID="62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2" exitCode=0 Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.034373 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.034613 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" event={"ID":"37a0e150-2fac-4995-8080-9b2061039b80","Type":"ContainerDied","Data":"62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2"} Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.035070 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll" event={"ID":"37a0e150-2fac-4995-8080-9b2061039b80","Type":"ContainerDied","Data":"52f6c0b92afbf07f69271bf8f2f817a2022eb3237e2d75a666802b93ec1933a1"} Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.035116 5117 scope.go:117] "RemoveContainer" containerID="62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2" Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.060968 5117 scope.go:117] "RemoveContainer" containerID="62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2" Jan 23 08:58:40 crc kubenswrapper[5117]: E0123 08:58:40.061545 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2\": container with ID starting with 62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2 not found: ID does not exist" containerID="62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2" Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.061578 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2"} err="failed to get container status \"62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2\": rpc error: code = NotFound desc = could 
not find container \"62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2\": container with ID starting with 62ae48cc0a089b25c97f519c3c7b9dd4f0a54ec582179fee3f8ca7d23bd6a4c2 not found: ID does not exist" Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.071589 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll"] Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.076930 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-6b596cb98b-fbvll"] Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.202234 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f"] Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.210246 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 08:58:40 crc kubenswrapper[5117]: I0123 08:58:40.779061 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="37a0e150-2fac-4995-8080-9b2061039b80" path="/var/lib/kubelet/pods/37a0e150-2fac-4995-8080-9b2061039b80/volumes" Jan 23 08:58:41 crc kubenswrapper[5117]: I0123 08:58:41.044057 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" event={"ID":"e45257c3-4158-4be0-8625-8c2501344ab1","Type":"ContainerStarted","Data":"13738a54a7dfa75608c5236bb0ff4d2255bb8357f98cefa0def81e0d645074b0"} Jan 23 08:58:41 crc kubenswrapper[5117]: I0123 08:58:41.044109 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" event={"ID":"e45257c3-4158-4be0-8625-8c2501344ab1","Type":"ContainerStarted","Data":"f643ee221fe7ec3915277a1c4a01a73d2d559bef0fb56424fc02cb106db92ec4"} Jan 23 08:58:41 crc kubenswrapper[5117]: I0123 08:58:41.044594 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:58:41 crc kubenswrapper[5117]: I0123 08:58:41.069484 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" podStartSLOduration=2.069459301 podStartE2EDuration="2.069459301s" podCreationTimestamp="2026-01-23 08:58:39 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 08:58:41.060902228 +0000 UTC m=+332.817027294" watchObservedRunningTime="2026-01-23 08:58:41.069459301 +0000 UTC m=+332.825584367" Jan 23 08:58:41 crc kubenswrapper[5117]: I0123 08:58:41.176547 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-route-controller-manager/route-controller-manager-8d4c8b555-l955f" Jan 23 08:59:45 crc kubenswrapper[5117]: I0123 08:59:45.063872 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 08:59:45 crc kubenswrapper[5117]: I0123 08:59:45.064581 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" 
podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.181669 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt"] Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.193217 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt"] Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.193519 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.196241 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.196260 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.318007 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-config-volume\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.318423 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-secret-volume\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.318610 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ttb9k\" (UniqueName: \"kubernetes.io/projected/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-kube-api-access-ttb9k\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.419363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-secret-volume\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.419414 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ttb9k\" (UniqueName: \"kubernetes.io/projected/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-kube-api-access-ttb9k\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.419451 5117 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-config-volume\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.420462 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-config-volume\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.425889 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-secret-volume\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.437300 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ttb9k\" (UniqueName: \"kubernetes.io/projected/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-kube-api-access-ttb9k\") pod \"collect-profiles-29485980-rxhgt\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.512416 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:00 crc kubenswrapper[5117]: I0123 09:00:00.925958 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt"] Jan 23 09:00:00 crc kubenswrapper[5117]: W0123 09:00:00.935160 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9bb1e27e_4110_45c3_a59d_5b32319dd1ba.slice/crio-e207369d2185cf0e98dad7292144d288a8999cc0bac317ceca01860c39f2af01 WatchSource:0}: Error finding container e207369d2185cf0e98dad7292144d288a8999cc0bac317ceca01860c39f2af01: Status 404 returned error can't find the container with id e207369d2185cf0e98dad7292144d288a8999cc0bac317ceca01860c39f2af01 Jan 23 09:00:01 crc kubenswrapper[5117]: I0123 09:00:01.576958 5117 generic.go:358] "Generic (PLEG): container finished" podID="9bb1e27e-4110-45c3-a59d-5b32319dd1ba" containerID="56745a92ffdf52c524bda7bcbd277dea941d02cddcc0780e70bcc69a8b86c919" exitCode=0 Jan 23 09:00:01 crc kubenswrapper[5117]: I0123 09:00:01.577164 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" event={"ID":"9bb1e27e-4110-45c3-a59d-5b32319dd1ba","Type":"ContainerDied","Data":"56745a92ffdf52c524bda7bcbd277dea941d02cddcc0780e70bcc69a8b86c919"} Jan 23 09:00:01 crc kubenswrapper[5117]: I0123 09:00:01.577329 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" event={"ID":"9bb1e27e-4110-45c3-a59d-5b32319dd1ba","Type":"ContainerStarted","Data":"e207369d2185cf0e98dad7292144d288a8999cc0bac317ceca01860c39f2af01"} Jan 23 09:00:02 crc kubenswrapper[5117]: I0123 09:00:02.930011 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.071831 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-secret-volume\") pod \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.071887 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ttb9k\" (UniqueName: \"kubernetes.io/projected/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-kube-api-access-ttb9k\") pod \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.071953 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-config-volume\") pod \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\" (UID: \"9bb1e27e-4110-45c3-a59d-5b32319dd1ba\") " Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.072958 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-config-volume" (OuterVolumeSpecName: "config-volume") pod "9bb1e27e-4110-45c3-a59d-5b32319dd1ba" (UID: "9bb1e27e-4110-45c3-a59d-5b32319dd1ba"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.078596 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-kube-api-access-ttb9k" (OuterVolumeSpecName: "kube-api-access-ttb9k") pod "9bb1e27e-4110-45c3-a59d-5b32319dd1ba" (UID: "9bb1e27e-4110-45c3-a59d-5b32319dd1ba"). InnerVolumeSpecName "kube-api-access-ttb9k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.078610 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9bb1e27e-4110-45c3-a59d-5b32319dd1ba" (UID: "9bb1e27e-4110-45c3-a59d-5b32319dd1ba"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.173386 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.173725 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ttb9k\" (UniqueName: \"kubernetes.io/projected/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-kube-api-access-ttb9k\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.173734 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9bb1e27e-4110-45c3-a59d-5b32319dd1ba-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.590705 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.590720 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485980-rxhgt" event={"ID":"9bb1e27e-4110-45c3-a59d-5b32319dd1ba","Type":"ContainerDied","Data":"e207369d2185cf0e98dad7292144d288a8999cc0bac317ceca01860c39f2af01"} Jan 23 09:00:03 crc kubenswrapper[5117]: I0123 09:00:03.590790 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="e207369d2185cf0e98dad7292144d288a8999cc0bac317ceca01860c39f2af01" Jan 23 09:00:15 crc kubenswrapper[5117]: I0123 09:00:15.062932 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:00:15 crc kubenswrapper[5117]: I0123 09:00:15.063754 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.123052 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cgm87"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.123886 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-cgm87" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="registry-server" containerID="cri-o://6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e" gracePeriod=30 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.140652 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7qf4s"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.141159 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-7qf4s" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="registry-server" containerID="cri-o://b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7" gracePeriod=30 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.144717 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t67z9"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.145032 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" containerID="cri-o://6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a" gracePeriod=30 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.147747 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2wfd"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.148153 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-v2wfd" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="registry-server" 
containerID="cri-o://00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0" gracePeriod=30 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.150879 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bfxhb"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.152076 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9bb1e27e-4110-45c3-a59d-5b32319dd1ba" containerName="collect-profiles" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.152112 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9bb1e27e-4110-45c3-a59d-5b32319dd1ba" containerName="collect-profiles" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.152241 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="9bb1e27e-4110-45c3-a59d-5b32319dd1ba" containerName="collect-profiles" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.157082 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmlw5"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.157545 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.158284 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-hmlw5" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="registry-server" containerID="cri-o://2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803" gracePeriod=30 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.162791 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bfxhb"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.219333 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50c35dac-9534-4fed-a34f-d2cabebef0e6-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.219388 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50c35dac-9534-4fed-a34f-d2cabebef0e6-tmp\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.219485 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50c35dac-9534-4fed-a34f-d2cabebef0e6-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.219578 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s8q24\" (UniqueName: \"kubernetes.io/projected/50c35dac-9534-4fed-a34f-d2cabebef0e6-kube-api-access-s8q24\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " 
pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.320854 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s8q24\" (UniqueName: \"kubernetes.io/projected/50c35dac-9534-4fed-a34f-d2cabebef0e6-kube-api-access-s8q24\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.321259 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50c35dac-9534-4fed-a34f-d2cabebef0e6-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.321300 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50c35dac-9534-4fed-a34f-d2cabebef0e6-tmp\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.321355 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50c35dac-9534-4fed-a34f-d2cabebef0e6-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.322818 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/50c35dac-9534-4fed-a34f-d2cabebef0e6-marketplace-trusted-ca\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.322920 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/50c35dac-9534-4fed-a34f-d2cabebef0e6-tmp\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.328416 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/50c35dac-9534-4fed-a34f-d2cabebef0e6-marketplace-operator-metrics\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.341861 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s8q24\" (UniqueName: \"kubernetes.io/projected/50c35dac-9534-4fed-a34f-d2cabebef0e6-kube-api-access-s8q24\") pod \"marketplace-operator-547dbd544d-bfxhb\" (UID: \"50c35dac-9534-4fed-a34f-d2cabebef0e6\") " pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.526532 5117 util.go:30] "No sandbox for pod 
can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.529798 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.561746 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.580842 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.585380 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.605885 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624001 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ndkc4\" (UniqueName: \"kubernetes.io/projected/c818e404-5db9-4c23-893d-1cd602c404aa-kube-api-access-ndkc4\") pod \"c818e404-5db9-4c23-893d-1cd602c404aa\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624041 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-utilities\") pod \"aace2d11-7f7c-464c-b258-c61edb938e83\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624066 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-2jrvk\" (UniqueName: \"kubernetes.io/projected/aace2d11-7f7c-464c-b258-c61edb938e83-kube-api-access-2jrvk\") pod \"aace2d11-7f7c-464c-b258-c61edb938e83\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624089 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-catalog-content\") pod \"32b82517-9793-44f5-bc31-05ba0d27c553\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624116 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-catalog-content\") pod \"aace2d11-7f7c-464c-b258-c61edb938e83\" (UID: \"aace2d11-7f7c-464c-b258-c61edb938e83\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624216 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-utilities\") pod \"32b82517-9793-44f5-bc31-05ba0d27c553\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624258 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-catalog-content\") pod \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\" 
(UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624275 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-utilities\") pod \"861a8fa4-2b12-475a-819b-74238f4d1a60\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624314 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rngk6\" (UniqueName: \"kubernetes.io/projected/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-kube-api-access-rngk6\") pod \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624337 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c818e404-5db9-4c23-893d-1cd602c404aa-tmp\") pod \"c818e404-5db9-4c23-893d-1cd602c404aa\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624399 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-sttx4\" (UniqueName: \"kubernetes.io/projected/32b82517-9793-44f5-bc31-05ba0d27c553-kube-api-access-sttx4\") pod \"32b82517-9793-44f5-bc31-05ba0d27c553\" (UID: \"32b82517-9793-44f5-bc31-05ba0d27c553\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624446 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-catalog-content\") pod \"861a8fa4-2b12-475a-819b-74238f4d1a60\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624475 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-8dxjj\" (UniqueName: \"kubernetes.io/projected/861a8fa4-2b12-475a-819b-74238f4d1a60-kube-api-access-8dxjj\") pod \"861a8fa4-2b12-475a-819b-74238f4d1a60\" (UID: \"861a8fa4-2b12-475a-819b-74238f4d1a60\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624523 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-trusted-ca\") pod \"c818e404-5db9-4c23-893d-1cd602c404aa\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624575 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-operator-metrics\") pod \"c818e404-5db9-4c23-893d-1cd602c404aa\" (UID: \"c818e404-5db9-4c23-893d-1cd602c404aa\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.624595 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-utilities\") pod \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\" (UID: \"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a\") " Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.625469 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-utilities" (OuterVolumeSpecName: "utilities") pod 
"32b82517-9793-44f5-bc31-05ba0d27c553" (UID: "32b82517-9793-44f5-bc31-05ba0d27c553"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.628569 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-trusted-ca" (OuterVolumeSpecName: "marketplace-trusted-ca") pod "c818e404-5db9-4c23-893d-1cd602c404aa" (UID: "c818e404-5db9-4c23-893d-1cd602c404aa"). InnerVolumeSpecName "marketplace-trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.630895 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/c818e404-5db9-4c23-893d-1cd602c404aa-kube-api-access-ndkc4" (OuterVolumeSpecName: "kube-api-access-ndkc4") pod "c818e404-5db9-4c23-893d-1cd602c404aa" (UID: "c818e404-5db9-4c23-893d-1cd602c404aa"). InnerVolumeSpecName "kube-api-access-ndkc4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.631720 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-kube-api-access-rngk6" (OuterVolumeSpecName: "kube-api-access-rngk6") pod "5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" (UID: "5d31bbe8-9012-4c1b-8f77-3c795f1eef9a"). InnerVolumeSpecName "kube-api-access-rngk6". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.632097 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-utilities" (OuterVolumeSpecName: "utilities") pod "aace2d11-7f7c-464c-b258-c61edb938e83" (UID: "aace2d11-7f7c-464c-b258-c61edb938e83"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.632249 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/c818e404-5db9-4c23-893d-1cd602c404aa-tmp" (OuterVolumeSpecName: "tmp") pod "c818e404-5db9-4c23-893d-1cd602c404aa" (UID: "c818e404-5db9-4c23-893d-1cd602c404aa"). InnerVolumeSpecName "tmp". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.632739 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-utilities" (OuterVolumeSpecName: "utilities") pod "861a8fa4-2b12-475a-819b-74238f4d1a60" (UID: "861a8fa4-2b12-475a-819b-74238f4d1a60"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.635033 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-utilities" (OuterVolumeSpecName: "utilities") pod "5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" (UID: "5d31bbe8-9012-4c1b-8f77-3c795f1eef9a"). InnerVolumeSpecName "utilities". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.635634 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/32b82517-9793-44f5-bc31-05ba0d27c553-kube-api-access-sttx4" (OuterVolumeSpecName: "kube-api-access-sttx4") pod "32b82517-9793-44f5-bc31-05ba0d27c553" (UID: "32b82517-9793-44f5-bc31-05ba0d27c553"). InnerVolumeSpecName "kube-api-access-sttx4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639108 5117 reconciler_common.go:299] "Volume detached for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/c818e404-5db9-4c23-893d-1cd602c404aa-tmp\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639168 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-sttx4\" (UniqueName: \"kubernetes.io/projected/32b82517-9793-44f5-bc31-05ba0d27c553-kube-api-access-sttx4\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639188 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-trusted-ca\" (UniqueName: \"kubernetes.io/configmap/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639202 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639216 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ndkc4\" (UniqueName: \"kubernetes.io/projected/c818e404-5db9-4c23-893d-1cd602c404aa-kube-api-access-ndkc4\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639229 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639242 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639261 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.639275 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rngk6\" (UniqueName: \"kubernetes.io/projected/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-kube-api-access-rngk6\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.640516 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aace2d11-7f7c-464c-b258-c61edb938e83-kube-api-access-2jrvk" (OuterVolumeSpecName: "kube-api-access-2jrvk") pod "aace2d11-7f7c-464c-b258-c61edb938e83" (UID: "aace2d11-7f7c-464c-b258-c61edb938e83"). InnerVolumeSpecName "kube-api-access-2jrvk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.644207 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/861a8fa4-2b12-475a-819b-74238f4d1a60-kube-api-access-8dxjj" (OuterVolumeSpecName: "kube-api-access-8dxjj") pod "861a8fa4-2b12-475a-819b-74238f4d1a60" (UID: "861a8fa4-2b12-475a-819b-74238f4d1a60"). InnerVolumeSpecName "kube-api-access-8dxjj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.659077 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-operator-metrics" (OuterVolumeSpecName: "marketplace-operator-metrics") pod "c818e404-5db9-4c23-893d-1cd602c404aa" (UID: "c818e404-5db9-4c23-893d-1cd602c404aa"). InnerVolumeSpecName "marketplace-operator-metrics". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.671635 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "861a8fa4-2b12-475a-819b-74238f4d1a60" (UID: "861a8fa4-2b12-475a-819b-74238f4d1a60"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.694791 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "aace2d11-7f7c-464c-b258-c61edb938e83" (UID: "aace2d11-7f7c-464c-b258-c61edb938e83"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.716735 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "32b82517-9793-44f5-bc31-05ba0d27c553" (UID: "32b82517-9793-44f5-bc31-05ba0d27c553"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.742616 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/32b82517-9793-44f5-bc31-05ba0d27c553-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.742644 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/aace2d11-7f7c-464c-b258-c61edb938e83-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.742653 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/861a8fa4-2b12-475a-819b-74238f4d1a60-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.742662 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-8dxjj\" (UniqueName: \"kubernetes.io/projected/861a8fa4-2b12-475a-819b-74238f4d1a60-kube-api-access-8dxjj\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.742673 5117 reconciler_common.go:299] "Volume detached for volume \"marketplace-operator-metrics\" (UniqueName: \"kubernetes.io/secret/c818e404-5db9-4c23-893d-1cd602c404aa-marketplace-operator-metrics\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.742681 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-2jrvk\" (UniqueName: \"kubernetes.io/projected/aace2d11-7f7c-464c-b258-c61edb938e83-kube-api-access-2jrvk\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.751358 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" (UID: "5d31bbe8-9012-4c1b-8f77-3c795f1eef9a"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.766595 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-bfxhb"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.839506 5117 generic.go:358] "Generic (PLEG): container finished" podID="32b82517-9793-44f5-bc31-05ba0d27c553" containerID="b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7" exitCode=0 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.839562 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qf4s" event={"ID":"32b82517-9793-44f5-bc31-05ba0d27c553","Type":"ContainerDied","Data":"b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.839605 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-7qf4s" event={"ID":"32b82517-9793-44f5-bc31-05ba0d27c553","Type":"ContainerDied","Data":"a7573f271336096a2d1b9c11c0632ed3fffed5ed7a23f5a95db9d32c29407bc4"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.839631 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-7qf4s" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.839640 5117 scope.go:117] "RemoveContainer" containerID="b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.842739 5117 generic.go:358] "Generic (PLEG): container finished" podID="aace2d11-7f7c-464c-b258-c61edb938e83" containerID="6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e" exitCode=0 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.842871 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgm87" event={"ID":"aace2d11-7f7c-464c-b258-c61edb938e83","Type":"ContainerDied","Data":"6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.842899 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-cgm87" event={"ID":"aace2d11-7f7c-464c-b258-c61edb938e83","Type":"ContainerDied","Data":"bf821ca60582250467c9b8bfd4667a7d3ae2e366a44c13543764cfd92d036c9a"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.842991 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-cgm87" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.843772 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.847440 5117 generic.go:358] "Generic (PLEG): container finished" podID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerID="00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0" exitCode=0 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.847469 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2wfd" event={"ID":"861a8fa4-2b12-475a-819b-74238f4d1a60","Type":"ContainerDied","Data":"00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.847518 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-v2wfd" event={"ID":"861a8fa4-2b12-475a-819b-74238f4d1a60","Type":"ContainerDied","Data":"2a592f32f8f19c63156dd303d4a7553e406ebe27da1b244dea93b167d9812e67"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.847540 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-v2wfd" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.849848 5117 generic.go:358] "Generic (PLEG): container finished" podID="c818e404-5db9-4c23-893d-1cd602c404aa" containerID="6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a" exitCode=0 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.849929 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" event={"ID":"c818e404-5db9-4c23-893d-1cd602c404aa","Type":"ContainerDied","Data":"6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.849956 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" event={"ID":"c818e404-5db9-4c23-893d-1cd602c404aa","Type":"ContainerDied","Data":"674de7850eb997754f71894a1bb1e386e01c9c7fa57a32236978c8ddcfe880ce"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.850399 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/marketplace-operator-547dbd544d-t67z9" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.851617 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" event={"ID":"50c35dac-9534-4fed-a34f-d2cabebef0e6","Type":"ContainerStarted","Data":"3ebb4c806b5a6ba598cd74f26beeb8483589e430baf9c9071b378b568788e954"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.861678 5117 scope.go:117] "RemoveContainer" containerID="b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.861760 5117 generic.go:358] "Generic (PLEG): container finished" podID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerID="2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803" exitCode=0 Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.861847 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmlw5" event={"ID":"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a","Type":"ContainerDied","Data":"2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.861897 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hmlw5" event={"ID":"5d31bbe8-9012-4c1b-8f77-3c795f1eef9a","Type":"ContainerDied","Data":"3120479a1d7e691dc210b3ea1fd9624c47a6d18b472792288001c37d706e23c1"} Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.861906 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hmlw5" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.875168 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-cgm87"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.878428 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-cgm87"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.893051 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-7qf4s"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.893295 5117 scope.go:117] "RemoveContainer" containerID="912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.899123 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-7qf4s"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.904553 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t67z9"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.912495 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/marketplace-operator-547dbd544d-t67z9"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.920687 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-hmlw5"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.922544 5117 scope.go:117] "RemoveContainer" containerID="b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7" Jan 23 09:00:36 crc kubenswrapper[5117]: E0123 09:00:36.922898 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7\": container with ID starting with b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7 not found: ID does not exist" containerID="b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.922928 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7"} err="failed to get container status \"b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7\": rpc error: code = NotFound desc = could not find container \"b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7\": container with ID starting with b426b71d334e52d5cae6c2c1bf4576a1ca7cc912604722d2bfeb6786782acec7 not found: ID does not exist" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.922949 5117 scope.go:117] "RemoveContainer" containerID="b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f" Jan 23 09:00:36 crc kubenswrapper[5117]: E0123 09:00:36.923203 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f\": container with ID starting with b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f not found: ID does not exist" containerID="b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.923234 5117 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f"} err="failed to get container status \"b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f\": rpc error: code = NotFound desc = could not find container \"b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f\": container with ID starting with b1495bdce293025cfb4aaf61d674f2681f0dafffae0e3608d4f0631c3aa9b84f not found: ID does not exist" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.923253 5117 scope.go:117] "RemoveContainer" containerID="912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac" Jan 23 09:00:36 crc kubenswrapper[5117]: E0123 09:00:36.923491 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac\": container with ID starting with 912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac not found: ID does not exist" containerID="912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.923532 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac"} err="failed to get container status \"912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac\": rpc error: code = NotFound desc = could not find container \"912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac\": container with ID starting with 912bba9ff39f403c492a3b3b942a8c343daaf636479749d310e92d47910074ac not found: ID does not exist" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.923558 5117 scope.go:117] "RemoveContainer" containerID="6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.924997 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-hmlw5"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.928888 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2wfd"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.932378 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-v2wfd"] Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.939730 5117 scope.go:117] "RemoveContainer" containerID="f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.959092 5117 scope.go:117] "RemoveContainer" containerID="5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.987463 5117 scope.go:117] "RemoveContainer" containerID="6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e" Jan 23 09:00:36 crc kubenswrapper[5117]: E0123 09:00:36.988088 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e\": container with ID starting with 6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e not found: ID does not exist" containerID="6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.988190 5117 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e"} err="failed to get container status \"6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e\": rpc error: code = NotFound desc = could not find container \"6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e\": container with ID starting with 6559246c3d337e3477607da272377fae157c32924f6871555ab397d293e2306e not found: ID does not exist" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.988240 5117 scope.go:117] "RemoveContainer" containerID="f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0" Jan 23 09:00:36 crc kubenswrapper[5117]: E0123 09:00:36.988765 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0\": container with ID starting with f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0 not found: ID does not exist" containerID="f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.988817 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0"} err="failed to get container status \"f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0\": rpc error: code = NotFound desc = could not find container \"f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0\": container with ID starting with f1a679430998baa1bb54bfe85b5138114087d676aebcdb2782dd317ee54ff3a0 not found: ID does not exist" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.988852 5117 scope.go:117] "RemoveContainer" containerID="5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb" Jan 23 09:00:36 crc kubenswrapper[5117]: E0123 09:00:36.989085 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb\": container with ID starting with 5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb not found: ID does not exist" containerID="5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.989103 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb"} err="failed to get container status \"5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb\": rpc error: code = NotFound desc = could not find container \"5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb\": container with ID starting with 5dd747dee4f09e9aa747d3fd559b074eb762d8c9d949907d77653ace7ad275bb not found: ID does not exist" Jan 23 09:00:36 crc kubenswrapper[5117]: I0123 09:00:36.989118 5117 scope.go:117] "RemoveContainer" containerID="00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.010712 5117 scope.go:117] "RemoveContainer" containerID="d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.026777 5117 scope.go:117] "RemoveContainer" containerID="399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.043100 5117 
scope.go:117] "RemoveContainer" containerID="00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.043636 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0\": container with ID starting with 00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0 not found: ID does not exist" containerID="00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.043676 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0"} err="failed to get container status \"00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0\": rpc error: code = NotFound desc = could not find container \"00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0\": container with ID starting with 00470f260ee26298e437e5abe8c3c5150c7625c3213f8820e49c5f2c2d4ce9a0 not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.043702 5117 scope.go:117] "RemoveContainer" containerID="d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.044088 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501\": container with ID starting with d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501 not found: ID does not exist" containerID="d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.044152 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501"} err="failed to get container status \"d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501\": rpc error: code = NotFound desc = could not find container \"d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501\": container with ID starting with d13af3bf8507c3883cee83d6565f6720e051fd9143fe48d92228ec3bfbb5c501 not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.044178 5117 scope.go:117] "RemoveContainer" containerID="399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.044551 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3\": container with ID starting with 399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3 not found: ID does not exist" containerID="399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.044571 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3"} err="failed to get container status \"399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3\": rpc error: code = NotFound desc = could not find container \"399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3\": container with ID starting with 
399d912add6edea10ee404e57cb1a821d278609f112a1d21c25bbdbacaf696b3 not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.044585 5117 scope.go:117] "RemoveContainer" containerID="6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.058554 5117 scope.go:117] "RemoveContainer" containerID="a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.077145 5117 scope.go:117] "RemoveContainer" containerID="6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.078009 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a\": container with ID starting with 6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a not found: ID does not exist" containerID="6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.078100 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a"} err="failed to get container status \"6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a\": rpc error: code = NotFound desc = could not find container \"6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a\": container with ID starting with 6ced458c0334e733064437f74b8d7bc3963092790b3f839e97508ce21ff8a37a not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.078173 5117 scope.go:117] "RemoveContainer" containerID="a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.078834 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9\": container with ID starting with a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9 not found: ID does not exist" containerID="a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.078873 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9"} err="failed to get container status \"a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9\": rpc error: code = NotFound desc = could not find container \"a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9\": container with ID starting with a8e3c2943ca88059bd322d5d6d187fb9050891c1537c0ba213d45bdd86a5c6b9 not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.078897 5117 scope.go:117] "RemoveContainer" containerID="2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.095818 5117 scope.go:117] "RemoveContainer" containerID="abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.111741 5117 scope.go:117] "RemoveContainer" containerID="0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.128888 5117 scope.go:117] 
"RemoveContainer" containerID="2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.129590 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803\": container with ID starting with 2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803 not found: ID does not exist" containerID="2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.129628 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803"} err="failed to get container status \"2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803\": rpc error: code = NotFound desc = could not find container \"2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803\": container with ID starting with 2263f6d6c25f68c11ad2cb9e7dd8dee5e645c552e41f5ce711941efd8d1d3803 not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.129652 5117 scope.go:117] "RemoveContainer" containerID="abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.130183 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2\": container with ID starting with abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2 not found: ID does not exist" containerID="abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.130244 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2"} err="failed to get container status \"abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2\": rpc error: code = NotFound desc = could not find container \"abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2\": container with ID starting with abedbc616a6f9e9dc89f8f9dd271aedf2bf01ba9696ed1c5fde5f328ff6d6ec2 not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.130284 5117 scope.go:117] "RemoveContainer" containerID="0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe" Jan 23 09:00:37 crc kubenswrapper[5117]: E0123 09:00:37.130682 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe\": container with ID starting with 0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe not found: ID does not exist" containerID="0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.130708 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe"} err="failed to get container status \"0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe\": rpc error: code = NotFound desc = could not find container \"0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe\": container with ID starting with 
0d81a0db105ab68df493eedf36d374f4d19edd0aff0c434296bcd959d4e3d4fe not found: ID does not exist" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.872867 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" event={"ID":"50c35dac-9534-4fed-a34f-d2cabebef0e6","Type":"ContainerStarted","Data":"3eaeb2e30cf917c86b6021f1eeaf0923b2f8e6e9d459570e1db79c35c4c5d7f6"} Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.873215 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.878105 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" Jan 23 09:00:37 crc kubenswrapper[5117]: I0123 09:00:37.890922 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/marketplace-operator-547dbd544d-bfxhb" podStartSLOduration=1.8909044229999998 podStartE2EDuration="1.890904423s" podCreationTimestamp="2026-01-23 09:00:36 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:00:37.887850183 +0000 UTC m=+449.643975219" watchObservedRunningTime="2026-01-23 09:00:37.890904423 +0000 UTC m=+449.647029449" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325152 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-8st29"] Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325840 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325856 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325900 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325909 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325920 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325927 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325939 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325945 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325958 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325964 5117 
state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325979 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325987 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.325996 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326003 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326012 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326019 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326031 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326038 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="extract-content" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326049 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326056 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326068 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326074 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326088 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326123 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326151 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326160 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326173 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326180 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="extract-utilities" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326309 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326320 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="aace2d11-7f7c-464c-b258-c61edb938e83" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326334 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326344 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326383 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" containerName="registry-server" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.326679 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" containerName="marketplace-operator" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.379080 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8st29"] Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.379283 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.381832 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-marketplace-dockercfg-gg4w7\"" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.473898 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-fk7ms\" (UniqueName: \"kubernetes.io/projected/e4da0216-1017-40ba-b67e-1fdef15faa65-kube-api-access-fk7ms\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.473989 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-catalog-content\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.474048 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-utilities\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.531155 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-2wlmv"] Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.538034 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.544017 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"certified-operators-dockercfg-7cl8d\"" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.549194 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2wlmv"] Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.575020 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-utilities\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.575087 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-fk7ms\" (UniqueName: \"kubernetes.io/projected/e4da0216-1017-40ba-b67e-1fdef15faa65-kube-api-access-fk7ms\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.575119 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-catalog-content\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.575519 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-utilities\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.575609 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-catalog-content\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.596175 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-fk7ms\" (UniqueName: \"kubernetes.io/projected/e4da0216-1017-40ba-b67e-1fdef15faa65-kube-api-access-fk7ms\") pod \"redhat-marketplace-8st29\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.675840 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63caebec-168b-4984-9847-91f566659602-catalog-content\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.675900 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qk2hl\" (UniqueName: \"kubernetes.io/projected/63caebec-168b-4984-9847-91f566659602-kube-api-access-qk2hl\") pod \"certified-operators-2wlmv\" (UID: 
\"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.676197 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63caebec-168b-4984-9847-91f566659602-utilities\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.696508 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.776872 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63caebec-168b-4984-9847-91f566659602-utilities\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.777256 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63caebec-168b-4984-9847-91f566659602-catalog-content\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.777302 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qk2hl\" (UniqueName: \"kubernetes.io/projected/63caebec-168b-4984-9847-91f566659602-kube-api-access-qk2hl\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.777927 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/63caebec-168b-4984-9847-91f566659602-utilities\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.778366 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/63caebec-168b-4984-9847-91f566659602-catalog-content\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.779788 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="32b82517-9793-44f5-bc31-05ba0d27c553" path="/var/lib/kubelet/pods/32b82517-9793-44f5-bc31-05ba0d27c553/volumes" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.780708 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="5d31bbe8-9012-4c1b-8f77-3c795f1eef9a" path="/var/lib/kubelet/pods/5d31bbe8-9012-4c1b-8f77-3c795f1eef9a/volumes" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.781522 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="861a8fa4-2b12-475a-819b-74238f4d1a60" path="/var/lib/kubelet/pods/861a8fa4-2b12-475a-819b-74238f4d1a60/volumes" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.782802 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" 
podUID="aace2d11-7f7c-464c-b258-c61edb938e83" path="/var/lib/kubelet/pods/aace2d11-7f7c-464c-b258-c61edb938e83/volumes" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.783622 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="c818e404-5db9-4c23-893d-1cd602c404aa" path="/var/lib/kubelet/pods/c818e404-5db9-4c23-893d-1cd602c404aa/volumes" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.797558 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qk2hl\" (UniqueName: \"kubernetes.io/projected/63caebec-168b-4984-9847-91f566659602-kube-api-access-qk2hl\") pod \"certified-operators-2wlmv\" (UID: \"63caebec-168b-4984-9847-91f566659602\") " pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.854732 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:38 crc kubenswrapper[5117]: I0123 09:00:38.867400 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-8st29"] Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.049992 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-2wlmv"] Jan 23 09:00:39 crc kubenswrapper[5117]: W0123 09:00:39.104264 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod63caebec_168b_4984_9847_91f566659602.slice/crio-aa5c3f61ee3148631b6ea9937ca169e06aa450b3424eeeb1176734a40e7565ac WatchSource:0}: Error finding container aa5c3f61ee3148631b6ea9937ca169e06aa450b3424eeeb1176734a40e7565ac: Status 404 returned error can't find the container with id aa5c3f61ee3148631b6ea9937ca169e06aa450b3424eeeb1176734a40e7565ac Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.887336 5117 generic.go:358] "Generic (PLEG): container finished" podID="63caebec-168b-4984-9847-91f566659602" containerID="74554b351d7f6256b59716e658fac65c3a69edbc56854cc1a670916d361583bd" exitCode=0 Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.887404 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2wlmv" event={"ID":"63caebec-168b-4984-9847-91f566659602","Type":"ContainerDied","Data":"74554b351d7f6256b59716e658fac65c3a69edbc56854cc1a670916d361583bd"} Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.887882 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2wlmv" event={"ID":"63caebec-168b-4984-9847-91f566659602","Type":"ContainerStarted","Data":"aa5c3f61ee3148631b6ea9937ca169e06aa450b3424eeeb1176734a40e7565ac"} Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.893278 5117 generic.go:358] "Generic (PLEG): container finished" podID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerID="86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127" exitCode=0 Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.894569 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8st29" event={"ID":"e4da0216-1017-40ba-b67e-1fdef15faa65","Type":"ContainerDied","Data":"86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127"} Jan 23 09:00:39 crc kubenswrapper[5117]: I0123 09:00:39.894605 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8st29" 
event={"ID":"e4da0216-1017-40ba-b67e-1fdef15faa65","Type":"ContainerStarted","Data":"8bd2853da430fe2988f34edc417f9e075f26e11effa1094fe6098610880982d4"} Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.135958 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-94cgh"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.169283 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-94cgh"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.169510 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299267 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299360 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b4cf5809-82c3-4cd6-b149-f06e16944b1a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299514 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b4cf5809-82c3-4cd6-b149-f06e16944b1a-registry-certificates\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299569 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4cf5809-82c3-4cd6-b149-f06e16944b1a-trusted-ca\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299706 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-l4rlt\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-kube-api-access-l4rlt\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299770 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299898 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"registry-tls\" (UniqueName: 
\"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-registry-tls\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.299958 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b4cf5809-82c3-4cd6-b149-f06e16944b1a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.327581 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.400900 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b4cf5809-82c3-4cd6-b149-f06e16944b1a-registry-certificates\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401087 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4cf5809-82c3-4cd6-b149-f06e16944b1a-trusted-ca\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401125 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-l4rlt\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-kube-api-access-l4rlt\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401176 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401237 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-registry-tls\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401265 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b4cf5809-82c3-4cd6-b149-f06e16944b1a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") 
" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401323 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b4cf5809-82c3-4cd6-b149-f06e16944b1a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.401810 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/b4cf5809-82c3-4cd6-b149-f06e16944b1a-ca-trust-extracted\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.402920 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/b4cf5809-82c3-4cd6-b149-f06e16944b1a-trusted-ca\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.403229 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/b4cf5809-82c3-4cd6-b149-f06e16944b1a-registry-certificates\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.413832 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-registry-tls\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.414220 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/b4cf5809-82c3-4cd6-b149-f06e16944b1a-installation-pull-secrets\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.419461 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-l4rlt\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-kube-api-access-l4rlt\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.424005 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/b4cf5809-82c3-4cd6-b149-f06e16944b1a-bound-sa-token\") pod \"image-registry-5d9d95bf5b-94cgh\" (UID: \"b4cf5809-82c3-4cd6-b149-f06e16944b1a\") " pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.487624 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.725784 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-z5dbk"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.735391 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.737859 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5dbk"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.738398 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"redhat-operators-dockercfg-9gxlh\"" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.742676 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-image-registry/image-registry-5d9d95bf5b-94cgh"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.806931 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xx6b8\" (UniqueName: \"kubernetes.io/projected/d4898852-a834-401f-9f74-6d33c221a86f-kube-api-access-xx6b8\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.807345 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4898852-a834-401f-9f74-6d33c221a86f-utilities\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.807365 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4898852-a834-401f-9f74-6d33c221a86f-catalog-content\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.899633 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2wlmv" event={"ID":"63caebec-168b-4984-9847-91f566659602","Type":"ContainerStarted","Data":"e90f69a0e05a5479defb272fe3f2421d5ca6fb56010ddff621484078cf5a2504"} Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.906441 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" event={"ID":"b4cf5809-82c3-4cd6-b149-f06e16944b1a","Type":"ContainerStarted","Data":"0d198489510c0185d22aea4e61df477980d308cb603f872ea77369c1f1a480a5"} Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.910860 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xx6b8\" (UniqueName: \"kubernetes.io/projected/d4898852-a834-401f-9f74-6d33c221a86f-kube-api-access-xx6b8\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.910976 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4898852-a834-401f-9f74-6d33c221a86f-utilities\") pod 
\"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.911000 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4898852-a834-401f-9f74-6d33c221a86f-catalog-content\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.911662 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d4898852-a834-401f-9f74-6d33c221a86f-utilities\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.913013 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d4898852-a834-401f-9f74-6d33c221a86f-catalog-content\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.930525 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-5mslb"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.941524 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mslb"] Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.941701 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.946964 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"community-operators-dockercfg-vrd5f\"" Jan 23 09:00:40 crc kubenswrapper[5117]: I0123 09:00:40.947128 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xx6b8\" (UniqueName: \"kubernetes.io/projected/d4898852-a834-401f-9f74-6d33c221a86f-kube-api-access-xx6b8\") pod \"redhat-operators-z5dbk\" (UID: \"d4898852-a834-401f-9f74-6d33c221a86f\") " pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.012674 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-spg5d\" (UniqueName: \"kubernetes.io/projected/93a67942-29ab-4f71-877a-c5e802cb444d-kube-api-access-spg5d\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.012769 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93a67942-29ab-4f71-877a-c5e802cb444d-catalog-content\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.012805 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93a67942-29ab-4f71-877a-c5e802cb444d-utilities\") pod 
\"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.055801 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.118697 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-spg5d\" (UniqueName: \"kubernetes.io/projected/93a67942-29ab-4f71-877a-c5e802cb444d-kube-api-access-spg5d\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.118765 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93a67942-29ab-4f71-877a-c5e802cb444d-catalog-content\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.118783 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93a67942-29ab-4f71-877a-c5e802cb444d-utilities\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.119408 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/93a67942-29ab-4f71-877a-c5e802cb444d-utilities\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.119933 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/93a67942-29ab-4f71-877a-c5e802cb444d-catalog-content\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.143790 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-spg5d\" (UniqueName: \"kubernetes.io/projected/93a67942-29ab-4f71-877a-c5e802cb444d-kube-api-access-spg5d\") pod \"community-operators-5mslb\" (UID: \"93a67942-29ab-4f71-877a-c5e802cb444d\") " pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.271243 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.285628 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-z5dbk"] Jan 23 09:00:41 crc kubenswrapper[5117]: W0123 09:00:41.295396 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podd4898852_a834_401f_9f74_6d33c221a86f.slice/crio-5c21c280b2cac2cec509258763833c289a42316686ceb910a79044f2cf6ee7e2 WatchSource:0}: Error finding container 5c21c280b2cac2cec509258763833c289a42316686ceb910a79044f2cf6ee7e2: Status 404 returned error can't find the container with id 5c21c280b2cac2cec509258763833c289a42316686ceb910a79044f2cf6ee7e2 Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.458078 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-5mslb"] Jan 23 09:00:41 crc kubenswrapper[5117]: W0123 09:00:41.516877 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod93a67942_29ab_4f71_877a_c5e802cb444d.slice/crio-bf5ce4dfbb8c99deffaa88878cabb8b0b1eb3952da5feaafedca76bc37d73684 WatchSource:0}: Error finding container bf5ce4dfbb8c99deffaa88878cabb8b0b1eb3952da5feaafedca76bc37d73684: Status 404 returned error can't find the container with id bf5ce4dfbb8c99deffaa88878cabb8b0b1eb3952da5feaafedca76bc37d73684 Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.914179 5117 generic.go:358] "Generic (PLEG): container finished" podID="63caebec-168b-4984-9847-91f566659602" containerID="e90f69a0e05a5479defb272fe3f2421d5ca6fb56010ddff621484078cf5a2504" exitCode=0 Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.914470 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2wlmv" event={"ID":"63caebec-168b-4984-9847-91f566659602","Type":"ContainerDied","Data":"e90f69a0e05a5479defb272fe3f2421d5ca6fb56010ddff621484078cf5a2504"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.917934 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" event={"ID":"b4cf5809-82c3-4cd6-b149-f06e16944b1a","Type":"ContainerStarted","Data":"7827af2bde8397a28f41c16e0c7d6b72e8ca2614fad701859fb3835763e791b1"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.918077 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.920208 5117 generic.go:358] "Generic (PLEG): container finished" podID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerID="31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76" exitCode=0 Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.920257 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8st29" event={"ID":"e4da0216-1017-40ba-b67e-1fdef15faa65","Type":"ContainerDied","Data":"31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.923114 5117 generic.go:358] "Generic (PLEG): container finished" podID="d4898852-a834-401f-9f74-6d33c221a86f" containerID="e23897ac6ea6876e1823eca2b6db2a0cb79cb5971f6a395b564d433250f4ef6f" exitCode=0 Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.923167 5117 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="openshift-marketplace/redhat-operators-z5dbk" event={"ID":"d4898852-a834-401f-9f74-6d33c221a86f","Type":"ContainerDied","Data":"e23897ac6ea6876e1823eca2b6db2a0cb79cb5971f6a395b564d433250f4ef6f"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.923198 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5dbk" event={"ID":"d4898852-a834-401f-9f74-6d33c221a86f","Type":"ContainerStarted","Data":"5c21c280b2cac2cec509258763833c289a42316686ceb910a79044f2cf6ee7e2"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.926251 5117 generic.go:358] "Generic (PLEG): container finished" podID="93a67942-29ab-4f71-877a-c5e802cb444d" containerID="975c2518a39292c5c4fa704cc352e18f580517970d92ab9c81bfabe790e9acda" exitCode=0 Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.926310 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mslb" event={"ID":"93a67942-29ab-4f71-877a-c5e802cb444d","Type":"ContainerDied","Data":"975c2518a39292c5c4fa704cc352e18f580517970d92ab9c81bfabe790e9acda"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.926347 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mslb" event={"ID":"93a67942-29ab-4f71-877a-c5e802cb444d","Type":"ContainerStarted","Data":"bf5ce4dfbb8c99deffaa88878cabb8b0b1eb3952da5feaafedca76bc37d73684"} Jan 23 09:00:41 crc kubenswrapper[5117]: I0123 09:00:41.987599 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" podStartSLOduration=1.987581958 podStartE2EDuration="1.987581958s" podCreationTimestamp="2026-01-23 09:00:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:00:41.980903883 +0000 UTC m=+453.737028949" watchObservedRunningTime="2026-01-23 09:00:41.987581958 +0000 UTC m=+453.743707004" Jan 23 09:00:42 crc kubenswrapper[5117]: I0123 09:00:42.934027 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-2wlmv" event={"ID":"63caebec-168b-4984-9847-91f566659602","Type":"ContainerStarted","Data":"ea735c0334330566446ed1bcf6b8f8c1721ad4cf4d2b9f105d8059763ecd1bed"} Jan 23 09:00:42 crc kubenswrapper[5117]: I0123 09:00:42.936396 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8st29" event={"ID":"e4da0216-1017-40ba-b67e-1fdef15faa65","Type":"ContainerStarted","Data":"08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f"} Jan 23 09:00:42 crc kubenswrapper[5117]: I0123 09:00:42.938070 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5dbk" event={"ID":"d4898852-a834-401f-9f74-6d33c221a86f","Type":"ContainerStarted","Data":"37194794a2e11aad15435bcbbc301d25a03cff3c51581753f5cc4f00c8badcf7"} Jan 23 09:00:42 crc kubenswrapper[5117]: I0123 09:00:42.940457 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mslb" event={"ID":"93a67942-29ab-4f71-877a-c5e802cb444d","Type":"ContainerStarted","Data":"b93ad59f94ecc69435389ae029c5d16555aba6eebfb713893c5ddba010601b97"} Jan 23 09:00:42 crc kubenswrapper[5117]: I0123 09:00:42.955475 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-2wlmv" podStartSLOduration=4.18127111 
podStartE2EDuration="4.955453041s" podCreationTimestamp="2026-01-23 09:00:38 +0000 UTC" firstStartedPulling="2026-01-23 09:00:39.888792895 +0000 UTC m=+451.644917931" lastFinishedPulling="2026-01-23 09:00:40.662974836 +0000 UTC m=+452.419099862" observedRunningTime="2026-01-23 09:00:42.952773882 +0000 UTC m=+454.708898908" watchObservedRunningTime="2026-01-23 09:00:42.955453041 +0000 UTC m=+454.711578067" Jan 23 09:00:42 crc kubenswrapper[5117]: I0123 09:00:42.972043 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-8st29" podStartSLOduration=4.114217459 podStartE2EDuration="4.972022225s" podCreationTimestamp="2026-01-23 09:00:38 +0000 UTC" firstStartedPulling="2026-01-23 09:00:39.895365707 +0000 UTC m=+451.651490733" lastFinishedPulling="2026-01-23 09:00:40.753170473 +0000 UTC m=+452.509295499" observedRunningTime="2026-01-23 09:00:42.970120289 +0000 UTC m=+454.726245325" watchObservedRunningTime="2026-01-23 09:00:42.972022225 +0000 UTC m=+454.728147251" Jan 23 09:00:43 crc kubenswrapper[5117]: I0123 09:00:43.947677 5117 generic.go:358] "Generic (PLEG): container finished" podID="93a67942-29ab-4f71-877a-c5e802cb444d" containerID="b93ad59f94ecc69435389ae029c5d16555aba6eebfb713893c5ddba010601b97" exitCode=0 Jan 23 09:00:43 crc kubenswrapper[5117]: I0123 09:00:43.948185 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mslb" event={"ID":"93a67942-29ab-4f71-877a-c5e802cb444d","Type":"ContainerDied","Data":"b93ad59f94ecc69435389ae029c5d16555aba6eebfb713893c5ddba010601b97"} Jan 23 09:00:43 crc kubenswrapper[5117]: I0123 09:00:43.953402 5117 generic.go:358] "Generic (PLEG): container finished" podID="d4898852-a834-401f-9f74-6d33c221a86f" containerID="37194794a2e11aad15435bcbbc301d25a03cff3c51581753f5cc4f00c8badcf7" exitCode=0 Jan 23 09:00:43 crc kubenswrapper[5117]: I0123 09:00:43.954345 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5dbk" event={"ID":"d4898852-a834-401f-9f74-6d33c221a86f","Type":"ContainerDied","Data":"37194794a2e11aad15435bcbbc301d25a03cff3c51581753f5cc4f00c8badcf7"} Jan 23 09:00:44 crc kubenswrapper[5117]: I0123 09:00:44.964822 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-z5dbk" event={"ID":"d4898852-a834-401f-9f74-6d33c221a86f","Type":"ContainerStarted","Data":"b035c5f1c5f823e41e071b65ce35578dbf235a611b0f29afc60ab27e55519d8e"} Jan 23 09:00:44 crc kubenswrapper[5117]: I0123 09:00:44.967671 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-5mslb" event={"ID":"93a67942-29ab-4f71-877a-c5e802cb444d","Type":"ContainerStarted","Data":"2b799dba702464ab086f6578daa6ec6a21f6dcf730fa83f6803bc62de792a5ac"} Jan 23 09:00:44 crc kubenswrapper[5117]: I0123 09:00:44.985906 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-z5dbk" podStartSLOduration=4.36826262 podStartE2EDuration="4.985885085s" podCreationTimestamp="2026-01-23 09:00:40 +0000 UTC" firstStartedPulling="2026-01-23 09:00:41.923958258 +0000 UTC m=+453.680083284" lastFinishedPulling="2026-01-23 09:00:42.541580713 +0000 UTC m=+454.297705749" observedRunningTime="2026-01-23 09:00:44.982585479 +0000 UTC m=+456.738710525" watchObservedRunningTime="2026-01-23 09:00:44.985885085 +0000 UTC m=+456.742010121" Jan 23 09:00:44 crc kubenswrapper[5117]: I0123 09:00:44.999408 5117 
pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-5mslb" podStartSLOduration=4.204270046 podStartE2EDuration="4.99939196s" podCreationTimestamp="2026-01-23 09:00:40 +0000 UTC" firstStartedPulling="2026-01-23 09:00:41.92949342 +0000 UTC m=+453.685618446" lastFinishedPulling="2026-01-23 09:00:42.724615324 +0000 UTC m=+454.480740360" observedRunningTime="2026-01-23 09:00:44.997989549 +0000 UTC m=+456.754114585" watchObservedRunningTime="2026-01-23 09:00:44.99939196 +0000 UTC m=+456.755516996" Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.062890 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.062972 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.063028 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.063617 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"8a985eb1e9e04c52887c77708e8b5b6b0c0bbd9b01592af6ad6940a52462f607"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.063681 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://8a985eb1e9e04c52887c77708e8b5b6b0c0bbd9b01592af6ad6940a52462f607" gracePeriod=600 Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.974405 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="8a985eb1e9e04c52887c77708e8b5b6b0c0bbd9b01592af6ad6940a52462f607" exitCode=0 Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.976381 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"8a985eb1e9e04c52887c77708e8b5b6b0c0bbd9b01592af6ad6940a52462f607"} Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.976425 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"b97661a90bbce2393db26e77bd29a143f0d98233c5c69fccbeb1b1da6802d3e6"} Jan 23 09:00:45 crc kubenswrapper[5117]: I0123 09:00:45.976449 5117 scope.go:117] "RemoveContainer" containerID="c1ef916fc9ddc28b4cf6d39c794f56acc3529533f1acf6dc7245d743afdd645b" Jan 23 09:00:48 crc kubenswrapper[5117]: I0123 09:00:48.697686 5117 kubelet.go:2658] "SyncLoop (probe)" 
probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:48 crc kubenswrapper[5117]: I0123 09:00:48.698630 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:48 crc kubenswrapper[5117]: I0123 09:00:48.753424 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:48 crc kubenswrapper[5117]: I0123 09:00:48.855337 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:48 crc kubenswrapper[5117]: I0123 09:00:48.855418 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:48 crc kubenswrapper[5117]: I0123 09:00:48.897623 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:49 crc kubenswrapper[5117]: I0123 09:00:49.038316 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-2wlmv" Jan 23 09:00:49 crc kubenswrapper[5117]: I0123 09:00:49.040564 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:00:51 crc kubenswrapper[5117]: I0123 09:00:51.056526 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:51 crc kubenswrapper[5117]: I0123 09:00:51.057010 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:51 crc kubenswrapper[5117]: I0123 09:00:51.099494 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:51 crc kubenswrapper[5117]: I0123 09:00:51.272620 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:51 crc kubenswrapper[5117]: I0123 09:00:51.273520 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:51 crc kubenswrapper[5117]: I0123 09:00:51.309684 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:00:52 crc kubenswrapper[5117]: I0123 09:00:52.050724 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-z5dbk" Jan 23 09:00:52 crc kubenswrapper[5117]: I0123 09:00:52.052729 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-5mslb" Jan 23 09:01:03 crc kubenswrapper[5117]: I0123 09:01:02.948693 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-image-registry/image-registry-5d9d95bf5b-94cgh" Jan 23 09:01:03 crc kubenswrapper[5117]: I0123 09:01:03.002739 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nsh7s"] Jan 23 09:01:28 crc kubenswrapper[5117]: I0123 09:01:28.049693 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" 
podUID="3668af23-b087-479a-b9d8-d6e8b963ce57" containerName="registry" containerID="cri-o://55bd5d9f253c764c9de68c6b537afb1198b3effe91f5a7d83788f444010a799b" gracePeriod=30 Jan 23 09:01:28 crc kubenswrapper[5117]: I0123 09:01:28.228152 5117 generic.go:358] "Generic (PLEG): container finished" podID="3668af23-b087-479a-b9d8-d6e8b963ce57" containerID="55bd5d9f253c764c9de68c6b537afb1198b3effe91f5a7d83788f444010a799b" exitCode=0 Jan 23 09:01:28 crc kubenswrapper[5117]: I0123 09:01:28.228259 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" event={"ID":"3668af23-b087-479a-b9d8-d6e8b963ce57","Type":"ContainerDied","Data":"55bd5d9f253c764c9de68c6b537afb1198b3effe91f5a7d83788f444010a799b"} Jan 23 09:01:28 crc kubenswrapper[5117]: I0123 09:01:28.893662 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.028722 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-trusted-ca\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.028770 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-bound-sa-token\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.028798 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-certificates\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.028834 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-tls\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.028875 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-pgpkx\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-kube-api-access-pgpkx\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.028982 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"registry-storage\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.029029 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3668af23-b087-479a-b9d8-d6e8b963ce57-installation-pull-secrets\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 
09:01:29.029070 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3668af23-b087-479a-b9d8-d6e8b963ce57-ca-trust-extracted\") pod \"3668af23-b087-479a-b9d8-d6e8b963ce57\" (UID: \"3668af23-b087-479a-b9d8-d6e8b963ce57\") " Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.030819 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-certificates" (OuterVolumeSpecName: "registry-certificates") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "registry-certificates". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.032798 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-trusted-ca" (OuterVolumeSpecName: "trusted-ca") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "trusted-ca". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.036418 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-kube-api-access-pgpkx" (OuterVolumeSpecName: "kube-api-access-pgpkx") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "kube-api-access-pgpkx". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.036590 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-bound-sa-token" (OuterVolumeSpecName: "bound-sa-token") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "bound-sa-token". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.036551 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3668af23-b087-479a-b9d8-d6e8b963ce57-installation-pull-secrets" (OuterVolumeSpecName: "installation-pull-secrets") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "installation-pull-secrets". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.037355 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-tls" (OuterVolumeSpecName: "registry-tls") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "registry-tls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.041839 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2" (OuterVolumeSpecName: "registry-storage") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "pvc-b21f41aa-58d4-44b1-aeaa-280a8e32ddf2". 
PluginName "kubernetes.io/csi", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.058742 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3668af23-b087-479a-b9d8-d6e8b963ce57-ca-trust-extracted" (OuterVolumeSpecName: "ca-trust-extracted") pod "3668af23-b087-479a-b9d8-d6e8b963ce57" (UID: "3668af23-b087-479a-b9d8-d6e8b963ce57"). InnerVolumeSpecName "ca-trust-extracted". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130901 5117 reconciler_common.go:299] "Volume detached for volume \"trusted-ca\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-trusted-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130931 5117 reconciler_common.go:299] "Volume detached for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-bound-sa-token\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130941 5117 reconciler_common.go:299] "Volume detached for volume \"registry-certificates\" (UniqueName: \"kubernetes.io/configmap/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-certificates\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130956 5117 reconciler_common.go:299] "Volume detached for volume \"registry-tls\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-registry-tls\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130975 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-pgpkx\" (UniqueName: \"kubernetes.io/projected/3668af23-b087-479a-b9d8-d6e8b963ce57-kube-api-access-pgpkx\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130985 5117 reconciler_common.go:299] "Volume detached for volume \"installation-pull-secrets\" (UniqueName: \"kubernetes.io/secret/3668af23-b087-479a-b9d8-d6e8b963ce57-installation-pull-secrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.130993 5117 reconciler_common.go:299] "Volume detached for volume \"ca-trust-extracted\" (UniqueName: \"kubernetes.io/empty-dir/3668af23-b087-479a-b9d8-d6e8b963ce57-ca-trust-extracted\") on node \"crc\" DevicePath \"\"" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.235724 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.235757 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-image-registry/image-registry-66587d64c8-nsh7s" event={"ID":"3668af23-b087-479a-b9d8-d6e8b963ce57","Type":"ContainerDied","Data":"bfe401a096a959c41eda2119b14b3face2ee34802d4c99e4b0848a7cb2580fba"} Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.235874 5117 scope.go:117] "RemoveContainer" containerID="55bd5d9f253c764c9de68c6b537afb1198b3effe91f5a7d83788f444010a799b" Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.272401 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nsh7s"] Jan 23 09:01:29 crc kubenswrapper[5117]: I0123 09:01:29.278692 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-image-registry/image-registry-66587d64c8-nsh7s"] Jan 23 09:01:30 crc kubenswrapper[5117]: I0123 09:01:30.777324 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3668af23-b087-479a-b9d8-d6e8b963ce57" path="/var/lib/kubelet/pods/3668af23-b087-479a-b9d8-d6e8b963ce57/volumes" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.135454 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485982-bp52j"] Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.136712 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3668af23-b087-479a-b9d8-d6e8b963ce57" containerName="registry" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.136730 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3668af23-b087-479a-b9d8-d6e8b963ce57" containerName="registry" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.136851 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3668af23-b087-479a-b9d8-d6e8b963ce57" containerName="registry" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.249650 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485982-bp52j"] Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.249800 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.253178 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.253414 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.258241 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.365638 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wlr2q\" (UniqueName: \"kubernetes.io/projected/6fdf379f-c969-4262-aa1c-159c476a2e9f-kube-api-access-wlr2q\") pod \"auto-csr-approver-29485982-bp52j\" (UID: \"6fdf379f-c969-4262-aa1c-159c476a2e9f\") " pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.466953 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wlr2q\" (UniqueName: \"kubernetes.io/projected/6fdf379f-c969-4262-aa1c-159c476a2e9f-kube-api-access-wlr2q\") pod \"auto-csr-approver-29485982-bp52j\" (UID: \"6fdf379f-c969-4262-aa1c-159c476a2e9f\") " pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.489676 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wlr2q\" (UniqueName: \"kubernetes.io/projected/6fdf379f-c969-4262-aa1c-159c476a2e9f-kube-api-access-wlr2q\") pod \"auto-csr-approver-29485982-bp52j\" (UID: \"6fdf379f-c969-4262-aa1c-159c476a2e9f\") " pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.572014 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:00 crc kubenswrapper[5117]: I0123 09:02:00.760104 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485982-bp52j"] Jan 23 09:02:01 crc kubenswrapper[5117]: I0123 09:02:01.413421 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485982-bp52j" event={"ID":"6fdf379f-c969-4262-aa1c-159c476a2e9f","Type":"ContainerStarted","Data":"97f3acd26931b90254bbd9e871bf980628b358c21373538ede58dbe8b70b0bd5"} Jan 23 09:02:04 crc kubenswrapper[5117]: I0123 09:02:04.430679 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485982-bp52j" event={"ID":"6fdf379f-c969-4262-aa1c-159c476a2e9f","Type":"ContainerStarted","Data":"a79f864d8e4a0e465084f4083835d81bf3a949df7f3b25859cabcec4a25fe78f"} Jan 23 09:02:04 crc kubenswrapper[5117]: I0123 09:02:04.443243 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29485982-bp52j" podStartSLOduration=1.089194423 podStartE2EDuration="4.443200261s" podCreationTimestamp="2026-01-23 09:02:00 +0000 UTC" firstStartedPulling="2026-01-23 09:02:00.780871905 +0000 UTC m=+532.536996941" lastFinishedPulling="2026-01-23 09:02:04.134877733 +0000 UTC m=+535.891002779" observedRunningTime="2026-01-23 09:02:04.443063207 +0000 UTC m=+536.199188233" watchObservedRunningTime="2026-01-23 09:02:04.443200261 +0000 UTC m=+536.199325287" Jan 23 09:02:04 crc kubenswrapper[5117]: I0123 09:02:04.777245 5117 csr.go:274] "Certificate signing request is approved, waiting to be issued" logger="kubernetes.io/kubelet-serving" csr="csr-qpgnk" Jan 23 09:02:04 crc kubenswrapper[5117]: I0123 09:02:04.803601 5117 csr.go:270] "Certificate signing request is issued" logger="kubernetes.io/kubelet-serving" csr="csr-qpgnk" Jan 23 09:02:05 crc kubenswrapper[5117]: I0123 09:02:05.448296 5117 generic.go:358] "Generic (PLEG): container finished" podID="6fdf379f-c969-4262-aa1c-159c476a2e9f" containerID="a79f864d8e4a0e465084f4083835d81bf3a949df7f3b25859cabcec4a25fe78f" exitCode=0 Jan 23 09:02:05 crc kubenswrapper[5117]: I0123 09:02:05.448593 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485982-bp52j" event={"ID":"6fdf379f-c969-4262-aa1c-159c476a2e9f","Type":"ContainerDied","Data":"a79f864d8e4a0e465084f4083835d81bf3a949df7f3b25859cabcec4a25fe78f"} Jan 23 09:02:05 crc kubenswrapper[5117]: I0123 09:02:05.805083 5117 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-22 08:57:04 +0000 UTC" deadline="2026-02-14 05:09:46.978528911 +0000 UTC" Jan 23 09:02:05 crc kubenswrapper[5117]: I0123 09:02:05.805163 5117 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="524h7m41.173370366s" Jan 23 09:02:06 crc kubenswrapper[5117]: I0123 09:02:06.680667 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:06 crc kubenswrapper[5117]: I0123 09:02:06.706181 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wlr2q\" (UniqueName: \"kubernetes.io/projected/6fdf379f-c969-4262-aa1c-159c476a2e9f-kube-api-access-wlr2q\") pod \"6fdf379f-c969-4262-aa1c-159c476a2e9f\" (UID: \"6fdf379f-c969-4262-aa1c-159c476a2e9f\") " Jan 23 09:02:06 crc kubenswrapper[5117]: I0123 09:02:06.712214 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6fdf379f-c969-4262-aa1c-159c476a2e9f-kube-api-access-wlr2q" (OuterVolumeSpecName: "kube-api-access-wlr2q") pod "6fdf379f-c969-4262-aa1c-159c476a2e9f" (UID: "6fdf379f-c969-4262-aa1c-159c476a2e9f"). InnerVolumeSpecName "kube-api-access-wlr2q". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:02:06 crc kubenswrapper[5117]: I0123 09:02:06.805259 5117 certificate_manager.go:715] "Certificate rotation deadline determined" logger="kubernetes.io/kubelet-serving" expiration="2026-02-22 08:57:04 +0000 UTC" deadline="2026-02-19 00:39:23.903974988 +0000 UTC" Jan 23 09:02:06 crc kubenswrapper[5117]: I0123 09:02:06.805579 5117 certificate_manager.go:431] "Waiting for next certificate rotation" logger="kubernetes.io/kubelet-serving" sleep="639h37m17.098398847s" Jan 23 09:02:06 crc kubenswrapper[5117]: I0123 09:02:06.807889 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wlr2q\" (UniqueName: \"kubernetes.io/projected/6fdf379f-c969-4262-aa1c-159c476a2e9f-kube-api-access-wlr2q\") on node \"crc\" DevicePath \"\"" Jan 23 09:02:07 crc kubenswrapper[5117]: I0123 09:02:07.466057 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485982-bp52j" event={"ID":"6fdf379f-c969-4262-aa1c-159c476a2e9f","Type":"ContainerDied","Data":"97f3acd26931b90254bbd9e871bf980628b358c21373538ede58dbe8b70b0bd5"} Jan 23 09:02:07 crc kubenswrapper[5117]: I0123 09:02:07.466103 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="97f3acd26931b90254bbd9e871bf980628b358c21373538ede58dbe8b70b0bd5" Jan 23 09:02:07 crc kubenswrapper[5117]: I0123 09:02:07.466189 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485982-bp52j" Jan 23 09:02:45 crc kubenswrapper[5117]: I0123 09:02:45.063018 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:02:45 crc kubenswrapper[5117]: I0123 09:02:45.063621 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:03:09 crc kubenswrapper[5117]: I0123 09:03:09.080103 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:03:09 crc kubenswrapper[5117]: I0123 09:03:09.080925 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:03:15 crc kubenswrapper[5117]: I0123 09:03:15.062804 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:03:15 crc kubenswrapper[5117]: I0123 09:03:15.063095 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:03:45 crc kubenswrapper[5117]: I0123 09:03:45.063042 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:03:45 crc kubenswrapper[5117]: I0123 09:03:45.063551 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:03:45 crc kubenswrapper[5117]: I0123 09:03:45.063600 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:03:45 crc kubenswrapper[5117]: I0123 09:03:45.064201 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"b97661a90bbce2393db26e77bd29a143f0d98233c5c69fccbeb1b1da6802d3e6"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:03:45 crc kubenswrapper[5117]: I0123 09:03:45.064289 5117 
kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://b97661a90bbce2393db26e77bd29a143f0d98233c5c69fccbeb1b1da6802d3e6" gracePeriod=600 Jan 23 09:03:45 crc kubenswrapper[5117]: I0123 09:03:45.197265 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:03:46 crc kubenswrapper[5117]: I0123 09:03:46.116195 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="b97661a90bbce2393db26e77bd29a143f0d98233c5c69fccbeb1b1da6802d3e6" exitCode=0 Jan 23 09:03:46 crc kubenswrapper[5117]: I0123 09:03:46.116264 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"b97661a90bbce2393db26e77bd29a143f0d98233c5c69fccbeb1b1da6802d3e6"} Jan 23 09:03:46 crc kubenswrapper[5117]: I0123 09:03:46.116337 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"652ecc0b605bacfe2a16dd899d40ace6b8068602de3571538cddd08bf11b07a4"} Jan 23 09:03:46 crc kubenswrapper[5117]: I0123 09:03:46.116362 5117 scope.go:117] "RemoveContainer" containerID="8a985eb1e9e04c52887c77708e8b5b6b0c0bbd9b01592af6ad6940a52462f607" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.149278 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485984-sm46f"] Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.150579 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6fdf379f-c969-4262-aa1c-159c476a2e9f" containerName="oc" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.150600 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6fdf379f-c969-4262-aa1c-159c476a2e9f" containerName="oc" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.150754 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="6fdf379f-c969-4262-aa1c-159c476a2e9f" containerName="oc" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.163179 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485984-sm46f"] Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.163339 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.166683 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.166746 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.167555 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.262586 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-szrwc\" (UniqueName: \"kubernetes.io/projected/eb384748-18de-4780-81ca-0bb474334d96-kube-api-access-szrwc\") pod \"auto-csr-approver-29485984-sm46f\" (UID: \"eb384748-18de-4780-81ca-0bb474334d96\") " pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.364336 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-szrwc\" (UniqueName: \"kubernetes.io/projected/eb384748-18de-4780-81ca-0bb474334d96-kube-api-access-szrwc\") pod \"auto-csr-approver-29485984-sm46f\" (UID: \"eb384748-18de-4780-81ca-0bb474334d96\") " pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.395645 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-szrwc\" (UniqueName: \"kubernetes.io/projected/eb384748-18de-4780-81ca-0bb474334d96-kube-api-access-szrwc\") pod \"auto-csr-approver-29485984-sm46f\" (UID: \"eb384748-18de-4780-81ca-0bb474334d96\") " pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.483303 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:00 crc kubenswrapper[5117]: I0123 09:04:00.665480 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485984-sm46f"] Jan 23 09:04:00 crc kubenswrapper[5117]: W0123 09:04:00.669509 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podeb384748_18de_4780_81ca_0bb474334d96.slice/crio-c33d3e2fe37bd24fed962b870b3d4a540688ae184d6e6de67988dab88f5d2389 WatchSource:0}: Error finding container c33d3e2fe37bd24fed962b870b3d4a540688ae184d6e6de67988dab88f5d2389: Status 404 returned error can't find the container with id c33d3e2fe37bd24fed962b870b3d4a540688ae184d6e6de67988dab88f5d2389 Jan 23 09:04:01 crc kubenswrapper[5117]: I0123 09:04:01.205901 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485984-sm46f" event={"ID":"eb384748-18de-4780-81ca-0bb474334d96","Type":"ContainerStarted","Data":"c33d3e2fe37bd24fed962b870b3d4a540688ae184d6e6de67988dab88f5d2389"} Jan 23 09:04:02 crc kubenswrapper[5117]: I0123 09:04:02.213323 5117 generic.go:358] "Generic (PLEG): container finished" podID="eb384748-18de-4780-81ca-0bb474334d96" containerID="e09028e6d51bf24dac649c6f231c3d3d5d0822c929cb256ad19188bb3f612cf8" exitCode=0 Jan 23 09:04:02 crc kubenswrapper[5117]: I0123 09:04:02.213434 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485984-sm46f" event={"ID":"eb384748-18de-4780-81ca-0bb474334d96","Type":"ContainerDied","Data":"e09028e6d51bf24dac649c6f231c3d3d5d0822c929cb256ad19188bb3f612cf8"} Jan 23 09:04:03 crc kubenswrapper[5117]: I0123 09:04:03.443372 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:03 crc kubenswrapper[5117]: I0123 09:04:03.514515 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-szrwc\" (UniqueName: \"kubernetes.io/projected/eb384748-18de-4780-81ca-0bb474334d96-kube-api-access-szrwc\") pod \"eb384748-18de-4780-81ca-0bb474334d96\" (UID: \"eb384748-18de-4780-81ca-0bb474334d96\") " Jan 23 09:04:03 crc kubenswrapper[5117]: I0123 09:04:03.523888 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/eb384748-18de-4780-81ca-0bb474334d96-kube-api-access-szrwc" (OuterVolumeSpecName: "kube-api-access-szrwc") pod "eb384748-18de-4780-81ca-0bb474334d96" (UID: "eb384748-18de-4780-81ca-0bb474334d96"). InnerVolumeSpecName "kube-api-access-szrwc". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:04:03 crc kubenswrapper[5117]: I0123 09:04:03.616370 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-szrwc\" (UniqueName: \"kubernetes.io/projected/eb384748-18de-4780-81ca-0bb474334d96-kube-api-access-szrwc\") on node \"crc\" DevicePath \"\"" Jan 23 09:04:04 crc kubenswrapper[5117]: I0123 09:04:04.230175 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485984-sm46f" event={"ID":"eb384748-18de-4780-81ca-0bb474334d96","Type":"ContainerDied","Data":"c33d3e2fe37bd24fed962b870b3d4a540688ae184d6e6de67988dab88f5d2389"} Jan 23 09:04:04 crc kubenswrapper[5117]: I0123 09:04:04.230200 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485984-sm46f" Jan 23 09:04:04 crc kubenswrapper[5117]: I0123 09:04:04.230214 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c33d3e2fe37bd24fed962b870b3d4a540688ae184d6e6de67988dab88f5d2389" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.566052 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-wcgk2"] Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.567301 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="eb384748-18de-4780-81ca-0bb474334d96" containerName="oc" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.567318 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="eb384748-18de-4780-81ca-0bb474334d96" containerName="oc" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.567442 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="eb384748-18de-4780-81ca-0bb474334d96" containerName="oc" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.589850 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wcgk2"] Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.590040 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.681748 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mbkvv\" (UniqueName: \"kubernetes.io/projected/a331b1f5-54bb-44dc-be63-3b83aee28626-kube-api-access-mbkvv\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.681836 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-catalog-content\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.681889 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-utilities\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.782594 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-utilities\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.782860 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mbkvv\" (UniqueName: \"kubernetes.io/projected/a331b1f5-54bb-44dc-be63-3b83aee28626-kube-api-access-mbkvv\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.782979 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-catalog-content\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.783169 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-utilities\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.783307 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-catalog-content\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.802391 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mbkvv\" (UniqueName: \"kubernetes.io/projected/a331b1f5-54bb-44dc-be63-3b83aee28626-kube-api-access-mbkvv\") pod \"redhat-operators-wcgk2\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:25 crc kubenswrapper[5117]: I0123 09:05:25.909422 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:26 crc kubenswrapper[5117]: I0123 09:05:26.177068 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-wcgk2"] Jan 23 09:05:26 crc kubenswrapper[5117]: I0123 09:05:26.807695 5117 generic.go:358] "Generic (PLEG): container finished" podID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerID="f238ed7b62ac03470c1a487224ac01a9529ed1faed4cff779aca89818f9503c6" exitCode=0 Jan 23 09:05:26 crc kubenswrapper[5117]: I0123 09:05:26.807798 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerDied","Data":"f238ed7b62ac03470c1a487224ac01a9529ed1faed4cff779aca89818f9503c6"} Jan 23 09:05:26 crc kubenswrapper[5117]: I0123 09:05:26.808466 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerStarted","Data":"077a1c1b4bbe75c5f44c62dc9d4e1494ef06ca1a93db2334ec68f25e5a97e1b2"} Jan 23 09:05:27 crc kubenswrapper[5117]: I0123 09:05:27.815157 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerStarted","Data":"8c890037c45ca13fcb9f54e5f00808dfec4ec5737a1b00e1ec08e6338ccb37a3"} Jan 23 09:05:28 crc kubenswrapper[5117]: I0123 09:05:28.822021 5117 generic.go:358] "Generic (PLEG): container finished" podID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerID="8c890037c45ca13fcb9f54e5f00808dfec4ec5737a1b00e1ec08e6338ccb37a3" exitCode=0 Jan 23 09:05:28 crc kubenswrapper[5117]: I0123 09:05:28.822175 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerDied","Data":"8c890037c45ca13fcb9f54e5f00808dfec4ec5737a1b00e1ec08e6338ccb37a3"} 
Jan 23 09:05:29 crc kubenswrapper[5117]: I0123 09:05:29.829694 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerStarted","Data":"2e45a4ae0d26c2874fa14deab0b11df47e8896b76064453345b31e24566ae501"} Jan 23 09:05:29 crc kubenswrapper[5117]: I0123 09:05:29.850040 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-wcgk2" podStartSLOduration=4.052705011 podStartE2EDuration="4.850024475s" podCreationTimestamp="2026-01-23 09:05:25 +0000 UTC" firstStartedPulling="2026-01-23 09:05:26.809457646 +0000 UTC m=+738.565582712" lastFinishedPulling="2026-01-23 09:05:27.60677714 +0000 UTC m=+739.362902176" observedRunningTime="2026-01-23 09:05:29.847877803 +0000 UTC m=+741.604002829" watchObservedRunningTime="2026-01-23 09:05:29.850024475 +0000 UTC m=+741.606149501" Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.553157 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn"] Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.553759 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="kube-rbac-proxy" containerID="cri-o://b54f8abf9698c7d13f9379cc94bdb4312ebbc6b678c9c1231f59af21b597cd1a" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.553834 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="ovnkube-cluster-manager" containerID="cri-o://2159d23e539f3a8eebf8a266c8a1191c5d9d3a03b3228280c9be06074f58679a" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.788865 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6t5h9"] Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.789718 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-controller" containerID="cri-o://b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.790039 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="northd" containerID="cri-o://f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.790209 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-ovn-metrics" containerID="cri-o://c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.790289 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-node" containerID="cri-o://79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" 
gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.790319 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="sbdb" containerID="cri-o://694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.790396 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="nbdb" containerID="cri-o://ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.790417 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-acl-logging" containerID="cri-o://6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.818520 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovnkube-controller" containerID="cri-o://c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" gracePeriod=30 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.838017 5117 generic.go:358] "Generic (PLEG): container finished" podID="8d788132-7791-4db1-9057-4112a18f44fa" containerID="2159d23e539f3a8eebf8a266c8a1191c5d9d3a03b3228280c9be06074f58679a" exitCode=0 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.838043 5117 generic.go:358] "Generic (PLEG): container finished" podID="8d788132-7791-4db1-9057-4112a18f44fa" containerID="b54f8abf9698c7d13f9379cc94bdb4312ebbc6b678c9c1231f59af21b597cd1a" exitCode=0 Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.838193 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" event={"ID":"8d788132-7791-4db1-9057-4112a18f44fa","Type":"ContainerDied","Data":"2159d23e539f3a8eebf8a266c8a1191c5d9d3a03b3228280c9be06074f58679a"} Jan 23 09:05:30 crc kubenswrapper[5117]: I0123 09:05:30.838245 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" event={"ID":"8d788132-7791-4db1-9057-4112a18f44fa","Type":"ContainerDied","Data":"b54f8abf9698c7d13f9379cc94bdb4312ebbc6b678c9c1231f59af21b597cd1a"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.257483 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.270312 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-ovnkube-config\") pod \"8d788132-7791-4db1-9057-4112a18f44fa\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.270400 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5wgfg\" (UniqueName: \"kubernetes.io/projected/8d788132-7791-4db1-9057-4112a18f44fa-kube-api-access-5wgfg\") pod \"8d788132-7791-4db1-9057-4112a18f44fa\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.270457 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d788132-7791-4db1-9057-4112a18f44fa-ovn-control-plane-metrics-cert\") pod \"8d788132-7791-4db1-9057-4112a18f44fa\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.270605 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-env-overrides\") pod \"8d788132-7791-4db1-9057-4112a18f44fa\" (UID: \"8d788132-7791-4db1-9057-4112a18f44fa\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.271272 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "8d788132-7791-4db1-9057-4112a18f44fa" (UID: "8d788132-7791-4db1-9057-4112a18f44fa"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.271648 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "8d788132-7791-4db1-9057-4112a18f44fa" (UID: "8d788132-7791-4db1-9057-4112a18f44fa"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.280185 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/8d788132-7791-4db1-9057-4112a18f44fa-ovn-control-plane-metrics-cert" (OuterVolumeSpecName: "ovn-control-plane-metrics-cert") pod "8d788132-7791-4db1-9057-4112a18f44fa" (UID: "8d788132-7791-4db1-9057-4112a18f44fa"). InnerVolumeSpecName "ovn-control-plane-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.280525 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8d788132-7791-4db1-9057-4112a18f44fa-kube-api-access-5wgfg" (OuterVolumeSpecName: "kube-api-access-5wgfg") pod "8d788132-7791-4db1-9057-4112a18f44fa" (UID: "8d788132-7791-4db1-9057-4112a18f44fa"). InnerVolumeSpecName "kube-api-access-5wgfg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.286614 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w"] Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.287445 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="ovnkube-cluster-manager" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.287474 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="ovnkube-cluster-manager" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.287491 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="kube-rbac-proxy" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.287498 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="kube-rbac-proxy" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.287611 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="ovnkube-cluster-manager" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.287629 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="8d788132-7791-4db1-9057-4112a18f44fa" containerName="kube-rbac-proxy" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.291644 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372263 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/569551b5-9f2c-4726-9368-dea57a4063bc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372360 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/569551b5-9f2c-4726-9368-dea57a4063bc-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372432 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vd9rt\" (UniqueName: \"kubernetes.io/projected/569551b5-9f2c-4726-9368-dea57a4063bc-kube-api-access-vd9rt\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372457 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/569551b5-9f2c-4726-9368-dea57a4063bc-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 
09:05:31.372538 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372582 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/8d788132-7791-4db1-9057-4112a18f44fa-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372594 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5wgfg\" (UniqueName: \"kubernetes.io/projected/8d788132-7791-4db1-9057-4112a18f44fa-kube-api-access-5wgfg\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.372607 5117 reconciler_common.go:299] "Volume detached for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/8d788132-7791-4db1-9057-4112a18f44fa-ovn-control-plane-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.474307 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/569551b5-9f2c-4726-9368-dea57a4063bc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.474378 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/569551b5-9f2c-4726-9368-dea57a4063bc-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.474398 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vd9rt\" (UniqueName: \"kubernetes.io/projected/569551b5-9f2c-4726-9368-dea57a4063bc-kube-api-access-vd9rt\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.474421 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/569551b5-9f2c-4726-9368-dea57a4063bc-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.475266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/569551b5-9f2c-4726-9368-dea57a4063bc-env-overrides\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.475314 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/569551b5-9f2c-4726-9368-dea57a4063bc-ovnkube-config\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " 
pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.479047 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-control-plane-metrics-cert\" (UniqueName: \"kubernetes.io/secret/569551b5-9f2c-4726-9368-dea57a4063bc-ovn-control-plane-metrics-cert\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.492244 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vd9rt\" (UniqueName: \"kubernetes.io/projected/569551b5-9f2c-4726-9368-dea57a4063bc-kube-api-access-vd9rt\") pod \"ovnkube-control-plane-97c9b6c48-qwh6w\" (UID: \"569551b5-9f2c-4726-9368-dea57a4063bc\") " pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.563308 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6t5h9_d582122d-1bf3-4b38-95a3-a89488b98725/ovn-acl-logging/0.log" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.563723 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6t5h9_d582122d-1bf3-4b38-95a3-a89488b98725/ovn-controller/0.log" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.564169 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575071 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xn4sb\" (UniqueName: \"kubernetes.io/projected/d582122d-1bf3-4b38-95a3-a89488b98725-kube-api-access-xn4sb\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575201 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-systemd\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575277 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-log-socket\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575351 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-script-lib\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575393 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-openvswitch\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575422 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"run-ovn\" (UniqueName: 
\"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-ovn\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575470 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-log-socket" (OuterVolumeSpecName: "log-socket") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "log-socket". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575504 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-ovn-kubernetes\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575531 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-ovn" (OuterVolumeSpecName: "run-ovn") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "run-ovn". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575524 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-openvswitch" (OuterVolumeSpecName: "run-openvswitch") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "run-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575562 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-ovn-kubernetes" (OuterVolumeSpecName: "host-run-ovn-kubernetes") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-run-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575551 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-etc-openvswitch\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575621 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-etc-openvswitch" (OuterVolumeSpecName: "etc-openvswitch") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "etc-openvswitch". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575643 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-bin\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575669 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-bin" (OuterVolumeSpecName: "host-cni-bin") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-cni-bin". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575688 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-node-log\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575708 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-kubelet\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575739 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-netd\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575766 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-var-lib-openvswitch\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575775 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-node-log" (OuterVolumeSpecName: "node-log") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "node-log". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575800 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-config\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575803 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-kubelet" (OuterVolumeSpecName: "host-kubelet") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-kubelet". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575828 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-netd" (OuterVolumeSpecName: "host-cni-netd") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-cni-netd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575826 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d582122d-1bf3-4b38-95a3-a89488b98725-ovn-node-metrics-cert\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575853 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-var-lib-openvswitch" (OuterVolumeSpecName: "var-lib-openvswitch") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "var-lib-openvswitch". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575876 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-slash\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575929 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-systemd-units\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575958 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-netns\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.575979 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-var-lib-cni-networks-ovn-kubernetes\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576014 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-env-overrides\") pod \"d582122d-1bf3-4b38-95a3-a89488b98725\" (UID: \"d582122d-1bf3-4b38-95a3-a89488b98725\") " Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576423 5117 reconciler_common.go:299] "Volume detached for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-log-socket\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576428 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-config" (OuterVolumeSpecName: "ovnkube-config") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "ovnkube-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576442 5117 reconciler_common.go:299] "Volume detached for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576458 5117 reconciler_common.go:299] "Volume detached for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-ovn\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576470 5117 reconciler_common.go:299] "Volume detached for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576483 5117 reconciler_common.go:299] "Volume detached for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-etc-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576494 5117 reconciler_common.go:299] "Volume detached for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-bin\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576504 5117 reconciler_common.go:299] "Volume detached for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-node-log\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576515 5117 reconciler_common.go:299] "Volume detached for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-kubelet\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576526 5117 reconciler_common.go:299] "Volume detached for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-cni-netd\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576537 5117 reconciler_common.go:299] "Volume detached for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-var-lib-openvswitch\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576468 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-systemd-units" (OuterVolumeSpecName: "systemd-units") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "systemd-units". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576584 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-netns" (OuterVolumeSpecName: "host-run-netns") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-run-netns". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576610 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-var-lib-cni-networks-ovn-kubernetes" (OuterVolumeSpecName: "host-var-lib-cni-networks-ovn-kubernetes") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-var-lib-cni-networks-ovn-kubernetes". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.576982 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-env-overrides" (OuterVolumeSpecName: "env-overrides") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "env-overrides". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.577019 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-slash" (OuterVolumeSpecName: "host-slash") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "host-slash". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.577384 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-script-lib" (OuterVolumeSpecName: "ovnkube-script-lib") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "ovnkube-script-lib". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.580228 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d582122d-1bf3-4b38-95a3-a89488b98725-kube-api-access-xn4sb" (OuterVolumeSpecName: "kube-api-access-xn4sb") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "kube-api-access-xn4sb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.583061 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/d582122d-1bf3-4b38-95a3-a89488b98725-ovn-node-metrics-cert" (OuterVolumeSpecName: "ovn-node-metrics-cert") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "ovn-node-metrics-cert". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.591236 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-systemd" (OuterVolumeSpecName: "run-systemd") pod "d582122d-1bf3-4b38-95a3-a89488b98725" (UID: "d582122d-1bf3-4b38-95a3-a89488b98725"). InnerVolumeSpecName "run-systemd". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.607847 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.616754 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-9tkct"] Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617431 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="sbdb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617457 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="sbdb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617470 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovnkube-controller" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617478 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovnkube-controller" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617492 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kubecfg-setup" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617499 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kubecfg-setup" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617509 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="nbdb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617516 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="nbdb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617529 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-acl-logging" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617536 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-acl-logging" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617544 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-node" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617551 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-node" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617563 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="northd" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617570 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="northd" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617591 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617600 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 09:05:31 crc 
kubenswrapper[5117]: I0123 09:05:31.617611 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-controller" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617618 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-controller" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617725 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="sbdb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617741 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-ovn-metrics" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617754 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovnkube-controller" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617764 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-acl-logging" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617772 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="nbdb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617806 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="kube-rbac-proxy-node" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617818 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="ovn-controller" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.617829 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" containerName="northd" Jan 23 09:05:31 crc kubenswrapper[5117]: W0123 09:05:31.624201 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod569551b5_9f2c_4726_9368_dea57a4063bc.slice/crio-59aa00ab4fed0dc30d6ca1bff7acfde8e4a85637982f59be3e13856e125c9855 WatchSource:0}: Error finding container 59aa00ab4fed0dc30d6ca1bff7acfde8e4a85637982f59be3e13856e125c9855: Status 404 returned error can't find the container with id 59aa00ab4fed0dc30d6ca1bff7acfde8e4a85637982f59be3e13856e125c9855 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.652496 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.677748 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-run-netns\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.677901 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.677990 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-29g6r\" (UniqueName: \"kubernetes.io/projected/1a346f08-b274-481f-b99a-77f61e80cac5-kube-api-access-29g6r\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678163 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-systemd-units\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678273 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-env-overrides\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678344 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-ovn\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678409 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-cni-netd\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678483 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-etc-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678556 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-openvswitch\" (UniqueName: 
\"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678631 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-var-lib-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678700 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-cni-bin\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678763 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-node-log\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678829 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a346f08-b274-481f-b99a-77f61e80cac5-ovn-node-metrics-cert\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678904 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-systemd\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.678971 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-kubelet\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679036 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679107 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-ovnkube-script-lib\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679289 
5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-ovnkube-config\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679369 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-slash\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679432 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-log-socket\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679546 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-script-lib\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.679607 5117 reconciler_common.go:299] "Volume detached for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-ovnkube-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680020 5117 reconciler_common.go:299] "Volume detached for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/d582122d-1bf3-4b38-95a3-a89488b98725-ovn-node-metrics-cert\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680161 5117 reconciler_common.go:299] "Volume detached for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-slash\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680292 5117 reconciler_common.go:299] "Volume detached for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-systemd-units\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680432 5117 reconciler_common.go:299] "Volume detached for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-run-netns\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680530 5117 reconciler_common.go:299] "Volume detached for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-host-var-lib-cni-networks-ovn-kubernetes\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680677 5117 reconciler_common.go:299] "Volume detached for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/d582122d-1bf3-4b38-95a3-a89488b98725-env-overrides\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.680839 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xn4sb\" (UniqueName: 
\"kubernetes.io/projected/d582122d-1bf3-4b38-95a3-a89488b98725-kube-api-access-xn4sb\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.681043 5117 reconciler_common.go:299] "Volume detached for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/d582122d-1bf3-4b38-95a3-a89488b98725-run-systemd\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.782792 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-ovnkube-config\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.782848 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-slash\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.782898 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-log-socket\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.782974 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-slash\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-slash\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783037 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-run-netns\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783047 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"log-socket\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-log-socket\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783085 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783108 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-29g6r\" (UniqueName: \"kubernetes.io/projected/1a346f08-b274-481f-b99a-77f61e80cac5-kube-api-access-29g6r\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783210 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-systemd-units\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783242 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-env-overrides\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783170 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-netns\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-run-netns\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783176 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-run-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-run-ovn-kubernetes\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783311 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"systemd-units\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-systemd-units\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783391 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-ovn\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.783767 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-config\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-ovnkube-config\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785548 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"env-overrides\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-env-overrides\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785600 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-ovn\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-ovn\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785624 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-cni-netd\") pod 
\"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785653 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-etc-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785674 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785717 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-var-lib-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785730 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-cni-bin\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785751 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-node-log\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785764 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a346f08-b274-481f-b99a-77f61e80cac5-ovn-node-metrics-cert\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785797 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-systemd\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785820 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-kubelet\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785835 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9tkct\" (UID: 
\"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.785854 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-ovnkube-script-lib\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786400 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovnkube-script-lib\" (UniqueName: \"kubernetes.io/configmap/1a346f08-b274-481f-b99a-77f61e80cac5-ovnkube-script-lib\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786439 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-netd\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-cni-netd\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786461 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"etc-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-etc-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786481 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786500 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"var-lib-openvswitch\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-var-lib-openvswitch\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786524 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-cni-bin\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-cni-bin\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786544 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-log\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-node-log\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.786997 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-kubelet\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-kubelet\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 
09:05:31.787058 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"run-systemd\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-run-systemd\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.787088 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"host-var-lib-cni-networks-ovn-kubernetes\" (UniqueName: \"kubernetes.io/host-path/1a346f08-b274-481f-b99a-77f61e80cac5-host-var-lib-cni-networks-ovn-kubernetes\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.798348 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ovn-node-metrics-cert\" (UniqueName: \"kubernetes.io/secret/1a346f08-b274-481f-b99a-77f61e80cac5-ovn-node-metrics-cert\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.804070 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-29g6r\" (UniqueName: \"kubernetes.io/projected/1a346f08-b274-481f-b99a-77f61e80cac5-kube-api-access-29g6r\") pod \"ovnkube-node-9tkct\" (UID: \"1a346f08-b274-481f-b99a-77f61e80cac5\") " pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.844320 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.844395 5117 generic.go:358] "Generic (PLEG): container finished" podID="70f944bb-0390-45c1-914f-5389215db1cd" containerID="2ee96f4d14e54af55c30b158bbb62207aa46b2675985a445b3286d6eef1c6390" exitCode=2 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.844516 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7xdw" event={"ID":"70f944bb-0390-45c1-914f-5389215db1cd","Type":"ContainerDied","Data":"2ee96f4d14e54af55c30b158bbb62207aa46b2675985a445b3286d6eef1c6390"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.845251 5117 scope.go:117] "RemoveContainer" containerID="2ee96f4d14e54af55c30b158bbb62207aa46b2675985a445b3286d6eef1c6390" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.852651 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6t5h9_d582122d-1bf3-4b38-95a3-a89488b98725/ovn-acl-logging/0.log" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853045 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-ovn-kubernetes_ovnkube-node-6t5h9_d582122d-1bf3-4b38-95a3-a89488b98725/ovn-controller/0.log" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853407 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" exitCode=0 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853433 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" exitCode=0 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853442 5117 generic.go:358] "Generic (PLEG): 
container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" exitCode=0 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853448 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" exitCode=0 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853453 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" exitCode=0 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853458 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" exitCode=0 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853464 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" exitCode=143 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853470 5117 generic.go:358] "Generic (PLEG): container finished" podID="d582122d-1bf3-4b38-95a3-a89488b98725" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" exitCode=143 Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853484 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853530 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853546 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853557 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853568 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853580 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853591 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" 
containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853601 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853608 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853617 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853627 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853635 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853641 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853646 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853652 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853658 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853664 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853670 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853676 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853685 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853694 5117 
scope.go:117] "RemoveContainer" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853696 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853787 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853797 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853803 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853809 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853815 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853820 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853826 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853831 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853842 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" event={"ID":"d582122d-1bf3-4b38-95a3-a89488b98725","Type":"ContainerDied","Data":"87599575c899cc245520f59be906aeb601d24b0ad2b5be3297518f9534812e5d"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853854 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853860 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853865 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853870 5117 pod_container_deletor.go:114] "Failed to issue the request to remove 
container" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853876 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853883 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853888 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853804 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-6t5h9" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853894 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.853994 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.855351 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" event={"ID":"569551b5-9f2c-4726-9368-dea57a4063bc","Type":"ContainerStarted","Data":"59aa00ab4fed0dc30d6ca1bff7acfde8e4a85637982f59be3e13856e125c9855"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.859994 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" event={"ID":"8d788132-7791-4db1-9057-4112a18f44fa","Type":"ContainerDied","Data":"6d95578c1a46fd52a4661b939852ab04a85fb7501de06ae4b4c69a4e5c6f7906"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.860031 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"2159d23e539f3a8eebf8a266c8a1191c5d9d3a03b3228280c9be06074f58679a"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.860041 5117 pod_container_deletor.go:114] "Failed to issue the request to remove container" containerID={"Type":"cri-o","ID":"b54f8abf9698c7d13f9379cc94bdb4312ebbc6b678c9c1231f59af21b597cd1a"} Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.860118 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.894874 5117 scope.go:117] "RemoveContainer" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.904497 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6t5h9"] Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.912122 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-node-6t5h9"] Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.921025 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn"] Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.924243 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-ovn-kubernetes/ovnkube-control-plane-57b78d8988-997cn"] Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.927355 5117 scope.go:117] "RemoveContainer" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.954319 5117 scope.go:117] "RemoveContainer" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.966386 5117 scope.go:117] "RemoveContainer" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.978167 5117 scope.go:117] "RemoveContainer" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:31 crc kubenswrapper[5117]: I0123 09:05:31.989633 5117 scope.go:117] "RemoveContainer" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.002463 5117 scope.go:117] "RemoveContainer" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.010894 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.017213 5117 scope.go:117] "RemoveContainer" containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.057411 5117 scope.go:117] "RemoveContainer" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.057873 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": container with ID starting with c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151 not found: ID does not exist" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.057922 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} err="failed to get container status \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": rpc error: code = NotFound desc = could not find container \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": container with ID starting with c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.057955 5117 scope.go:117] "RemoveContainer" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.058366 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": container with ID starting with 694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb not found: ID does not exist" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.058427 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} err="failed to get container status \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": rpc error: code = NotFound desc = could not find container \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": container with ID starting with 694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.058462 5117 scope.go:117] "RemoveContainer" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.058779 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": container with ID starting with ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7 not found: ID does not exist" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.058826 5117 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} err="failed to get container status \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": rpc error: code = NotFound desc = could not find container \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": container with ID starting with ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.058857 5117 scope.go:117] "RemoveContainer" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.059235 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": container with ID starting with f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea not found: ID does not exist" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.059276 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} err="failed to get container status \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": rpc error: code = NotFound desc = could not find container \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": container with ID starting with f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.059294 5117 scope.go:117] "RemoveContainer" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.059514 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": container with ID starting with c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780 not found: ID does not exist" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.059550 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} err="failed to get container status \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": rpc error: code = NotFound desc = could not find container \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": container with ID starting with c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.059569 5117 scope.go:117] "RemoveContainer" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.059792 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": container with ID starting with 79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2 not found: ID does not exist" 
containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.059973 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} err="failed to get container status \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": rpc error: code = NotFound desc = could not find container \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": container with ID starting with 79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.059995 5117 scope.go:117] "RemoveContainer" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.060254 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": container with ID starting with 6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b not found: ID does not exist" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.060279 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} err="failed to get container status \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": rpc error: code = NotFound desc = could not find container \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": container with ID starting with 6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.060295 5117 scope.go:117] "RemoveContainer" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.060635 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": container with ID starting with b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5 not found: ID does not exist" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.060677 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} err="failed to get container status \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": rpc error: code = NotFound desc = could not find container \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": container with ID starting with b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.060694 5117 scope.go:117] "RemoveContainer" containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" Jan 23 09:05:32 crc kubenswrapper[5117]: E0123 09:05:32.060964 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container 
\"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": container with ID starting with d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8 not found: ID does not exist" containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.060994 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} err="failed to get container status \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": rpc error: code = NotFound desc = could not find container \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": container with ID starting with d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061013 5117 scope.go:117] "RemoveContainer" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061290 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} err="failed to get container status \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": rpc error: code = NotFound desc = could not find container \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": container with ID starting with c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061320 5117 scope.go:117] "RemoveContainer" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061523 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} err="failed to get container status \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": rpc error: code = NotFound desc = could not find container \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": container with ID starting with 694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061553 5117 scope.go:117] "RemoveContainer" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061846 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} err="failed to get container status \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": rpc error: code = NotFound desc = could not find container \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": container with ID starting with ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.061865 5117 scope.go:117] "RemoveContainer" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062184 5117 pod_container_deletor.go:53] "DeleteContainer returned error" 
containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} err="failed to get container status \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": rpc error: code = NotFound desc = could not find container \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": container with ID starting with f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062221 5117 scope.go:117] "RemoveContainer" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062455 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} err="failed to get container status \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": rpc error: code = NotFound desc = could not find container \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": container with ID starting with c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062478 5117 scope.go:117] "RemoveContainer" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062701 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} err="failed to get container status \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": rpc error: code = NotFound desc = could not find container \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": container with ID starting with 79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062723 5117 scope.go:117] "RemoveContainer" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062928 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} err="failed to get container status \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": rpc error: code = NotFound desc = could not find container \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": container with ID starting with 6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.062956 5117 scope.go:117] "RemoveContainer" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063155 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} err="failed to get container status \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": rpc error: code = NotFound desc = could not find container \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": container with ID starting with b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5 not found: ID does not exist" Jan 
23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063174 5117 scope.go:117] "RemoveContainer" containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063351 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} err="failed to get container status \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": rpc error: code = NotFound desc = could not find container \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": container with ID starting with d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063377 5117 scope.go:117] "RemoveContainer" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063550 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} err="failed to get container status \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": rpc error: code = NotFound desc = could not find container \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": container with ID starting with c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063569 5117 scope.go:117] "RemoveContainer" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063723 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} err="failed to get container status \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": rpc error: code = NotFound desc = could not find container \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": container with ID starting with 694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.063745 5117 scope.go:117] "RemoveContainer" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.064180 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} err="failed to get container status \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": rpc error: code = NotFound desc = could not find container \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": container with ID starting with ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.064477 5117 scope.go:117] "RemoveContainer" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.064695 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} err="failed to get container status 
\"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": rpc error: code = NotFound desc = could not find container \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": container with ID starting with f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.064714 5117 scope.go:117] "RemoveContainer" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.064952 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} err="failed to get container status \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": rpc error: code = NotFound desc = could not find container \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": container with ID starting with c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.064969 5117 scope.go:117] "RemoveContainer" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065202 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} err="failed to get container status \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": rpc error: code = NotFound desc = could not find container \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": container with ID starting with 79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065225 5117 scope.go:117] "RemoveContainer" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065388 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} err="failed to get container status \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": rpc error: code = NotFound desc = could not find container \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": container with ID starting with 6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065406 5117 scope.go:117] "RemoveContainer" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065576 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} err="failed to get container status \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": rpc error: code = NotFound desc = could not find container \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": container with ID starting with b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065603 5117 scope.go:117] "RemoveContainer" 
containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065791 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} err="failed to get container status \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": rpc error: code = NotFound desc = could not find container \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": container with ID starting with d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065811 5117 scope.go:117] "RemoveContainer" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.065992 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} err="failed to get container status \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": rpc error: code = NotFound desc = could not find container \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": container with ID starting with c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066009 5117 scope.go:117] "RemoveContainer" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066195 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} err="failed to get container status \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": rpc error: code = NotFound desc = could not find container \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": container with ID starting with 694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066215 5117 scope.go:117] "RemoveContainer" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066393 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} err="failed to get container status \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": rpc error: code = NotFound desc = could not find container \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": container with ID starting with ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066410 5117 scope.go:117] "RemoveContainer" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066590 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} err="failed to get container status \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": rpc error: code = NotFound desc = could not find 
container \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": container with ID starting with f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066610 5117 scope.go:117] "RemoveContainer" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066800 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} err="failed to get container status \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": rpc error: code = NotFound desc = could not find container \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": container with ID starting with c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.066820 5117 scope.go:117] "RemoveContainer" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067000 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} err="failed to get container status \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": rpc error: code = NotFound desc = could not find container \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": container with ID starting with 79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067019 5117 scope.go:117] "RemoveContainer" containerID="6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067256 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b"} err="failed to get container status \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": rpc error: code = NotFound desc = could not find container \"6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b\": container with ID starting with 6b3b96dc5eacde6d27f7441794cfb50b2d9f4dfa1d3cf02716093e719427334b not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067274 5117 scope.go:117] "RemoveContainer" containerID="b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067476 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5"} err="failed to get container status \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": rpc error: code = NotFound desc = could not find container \"b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5\": container with ID starting with b24a2a0f27199e5a9b1dd12eac744b81f8902fd156fe81759bceb46d8c06c8e5 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067500 5117 scope.go:117] "RemoveContainer" containerID="d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067672 5117 
pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8"} err="failed to get container status \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": rpc error: code = NotFound desc = could not find container \"d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8\": container with ID starting with d31133b1b55cc2bb46ede1b42912e7cd4009e8af21905b7333ac534c282482f8 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067692 5117 scope.go:117] "RemoveContainer" containerID="c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067856 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151"} err="failed to get container status \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": rpc error: code = NotFound desc = could not find container \"c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151\": container with ID starting with c278aa0a194a3b493826a04e8e6b80bd98800a8063d362b74a2dba73903e8151 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.067884 5117 scope.go:117] "RemoveContainer" containerID="694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068062 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb"} err="failed to get container status \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": rpc error: code = NotFound desc = could not find container \"694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb\": container with ID starting with 694428efaedbf6c33259ab1322e80e8435e3dd251291b0591fdb56a917fb12eb not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068080 5117 scope.go:117] "RemoveContainer" containerID="ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068318 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7"} err="failed to get container status \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": rpc error: code = NotFound desc = could not find container \"ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7\": container with ID starting with ef8769c4d7178cb37614fdd43f375fd4a626f414ca76e4d7b2017bff74b3c1e7 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068352 5117 scope.go:117] "RemoveContainer" containerID="f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068538 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea"} err="failed to get container status \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": rpc error: code = NotFound desc = could not find container \"f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea\": container with ID starting with 
f37505bcbf21696f86f56bc0ca90a21a1b9e98e267cd64b4d3bcef865fd1b0ea not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068555 5117 scope.go:117] "RemoveContainer" containerID="c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068708 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780"} err="failed to get container status \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": rpc error: code = NotFound desc = could not find container \"c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780\": container with ID starting with c81c496403f19f5339123a71c374396f572fd36516a6b61922cb73d864617780 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068728 5117 scope.go:117] "RemoveContainer" containerID="79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.068897 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2"} err="failed to get container status \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": rpc error: code = NotFound desc = could not find container \"79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2\": container with ID starting with 79fa53c41eb53de5157db3f3ab772d067c2d34ef4ad9577f85cbb78cc94d25c2 not found: ID does not exist" Jan 23 09:05:32 crc kubenswrapper[5117]: W0123 09:05:32.072147 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod1a346f08_b274_481f_b99a_77f61e80cac5.slice/crio-6f0c3c720040b086f2e1c8726a20f6297832aaffadf9a79c052f39b697c5bca9 WatchSource:0}: Error finding container 6f0c3c720040b086f2e1c8726a20f6297832aaffadf9a79c052f39b697c5bca9: Status 404 returned error can't find the container with id 6f0c3c720040b086f2e1c8726a20f6297832aaffadf9a79c052f39b697c5bca9 Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.778926 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="8d788132-7791-4db1-9057-4112a18f44fa" path="/var/lib/kubelet/pods/8d788132-7791-4db1-9057-4112a18f44fa/volumes" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.779521 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d582122d-1bf3-4b38-95a3-a89488b98725" path="/var/lib/kubelet/pods/d582122d-1bf3-4b38-95a3-a89488b98725/volumes" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.870396 5117 generic.go:358] "Generic (PLEG): container finished" podID="1a346f08-b274-481f-b99a-77f61e80cac5" containerID="dede3e1f8ad15e9e7e49dce3085de827ce03475ac61e4d0054fb3cc17fe8ab49" exitCode=0 Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.870526 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerDied","Data":"dede3e1f8ad15e9e7e49dce3085de827ce03475ac61e4d0054fb3cc17fe8ab49"} Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.870587 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" 
event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"6f0c3c720040b086f2e1c8726a20f6297832aaffadf9a79c052f39b697c5bca9"} Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.873330 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" event={"ID":"569551b5-9f2c-4726-9368-dea57a4063bc","Type":"ContainerStarted","Data":"9f9e2461b1f74ba80e2fea711a1ef5cf9247ae552b67b5e099a3634b12cb24a5"} Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.873395 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" event={"ID":"569551b5-9f2c-4726-9368-dea57a4063bc","Type":"ContainerStarted","Data":"356b3f322964fb180dc837c2bf8cd6443afe5585f7a1fa656cb6e8f4ca579c0a"} Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.876482 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.876651 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-multus/multus-g7xdw" event={"ID":"70f944bb-0390-45c1-914f-5389215db1cd","Type":"ContainerStarted","Data":"d5e2bb5e1ae867c3d66e29597f71e664b2aa42374e4080be9ca3053b0253999b"} Jan 23 09:05:32 crc kubenswrapper[5117]: I0123 09:05:32.946160 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-control-plane-97c9b6c48-qwh6w" podStartSLOduration=2.946143094 podStartE2EDuration="2.946143094s" podCreationTimestamp="2026-01-23 09:05:30 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:05:32.943645732 +0000 UTC m=+744.699770758" watchObservedRunningTime="2026-01-23 09:05:32.946143094 +0000 UTC m=+744.702268120" Jan 23 09:05:33 crc kubenswrapper[5117]: I0123 09:05:33.888001 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"6509b74e0cfce7eb64228f9916b1f762a0ba1bd80331c238ba407e2681434b28"} Jan 23 09:05:33 crc kubenswrapper[5117]: I0123 09:05:33.888300 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"9d9965ae95479c7bfa399ca7e520ffb3d6430c8c96b39a78a5cec433b0747154"} Jan 23 09:05:34 crc kubenswrapper[5117]: I0123 09:05:34.897991 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"74c4da2a9b5fb6e69ac77f41ce335d5355e68a65a1ae95c5bcf80f73eea702ab"} Jan 23 09:05:34 crc kubenswrapper[5117]: I0123 09:05:34.898033 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"6def48887a6ba79ff930db46f15b2507959ac9add2bd52afc811c704afca5688"} Jan 23 09:05:34 crc kubenswrapper[5117]: I0123 09:05:34.898045 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"8b3065c6fceb49586cf3913c554fc28acd15279e810c232bdb40ac12650c734e"} Jan 23 
09:05:34 crc kubenswrapper[5117]: I0123 09:05:34.898058 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"07d6ab55c4b97cdca44b5b5b65b1a189425b8e8aa7177fb870eb12592de97106"} Jan 23 09:05:35 crc kubenswrapper[5117]: I0123 09:05:35.910917 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:35 crc kubenswrapper[5117]: I0123 09:05:35.911524 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:35 crc kubenswrapper[5117]: I0123 09:05:35.958993 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:36 crc kubenswrapper[5117]: I0123 09:05:36.916606 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"4f3f7f1e883f7d75c869807617267f148f2cf1414a9c91b2db27c0b011caa8f3"} Jan 23 09:05:36 crc kubenswrapper[5117]: I0123 09:05:36.957182 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:36 crc kubenswrapper[5117]: I0123 09:05:36.992834 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wcgk2"] Jan 23 09:05:38 crc kubenswrapper[5117]: I0123 09:05:38.606831 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-sdtqw"] Jan 23 09:05:38 crc kubenswrapper[5117]: I0123 09:05:38.934357 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:38 crc kubenswrapper[5117]: I0123 09:05:38.935072 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-wcgk2" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="registry-server" containerID="cri-o://2e45a4ae0d26c2874fa14deab0b11df47e8896b76064453345b31e24566ae501" gracePeriod=2 Jan 23 09:05:38 crc kubenswrapper[5117]: I0123 09:05:38.989643 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-s27w2\" (UniqueName: \"kubernetes.io/projected/ce4bff21-0202-4e22-b701-98fd01cbdd01-kube-api-access-s27w2\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:38 crc kubenswrapper[5117]: I0123 09:05:38.989696 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-catalog-content\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:38 crc kubenswrapper[5117]: I0123 09:05:38.989748 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-utilities\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.091485 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-utilities\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.091578 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-s27w2\" (UniqueName: \"kubernetes.io/projected/ce4bff21-0202-4e22-b701-98fd01cbdd01-kube-api-access-s27w2\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.091628 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-catalog-content\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.092716 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-utilities\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.092904 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-catalog-content\") pod \"certified-operators-sdtqw\" (UID: 
\"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.121126 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-s27w2\" (UniqueName: \"kubernetes.io/projected/ce4bff21-0202-4e22-b701-98fd01cbdd01-kube-api-access-s27w2\") pod \"certified-operators-sdtqw\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.304808 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.327487 5117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(d82db2e5949e02055e9f1af5c2f97184e3fd9c647c0f6e397259b3e3562f58be): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.327553 5117 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(d82db2e5949e02055e9f1af5c2f97184e3fd9c647c0f6e397259b3e3562f58be): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.327575 5117 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(d82db2e5949e02055e9f1af5c2f97184e3fd9c647c0f6e397259b3e3562f58be): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.327628 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-sdtqw_openshift-marketplace(ce4bff21-0202-4e22-b701-98fd01cbdd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-sdtqw_openshift-marketplace(ce4bff21-0202-4e22-b701-98fd01cbdd01)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(d82db2e5949e02055e9f1af5c2f97184e3fd9c647c0f6e397259b3e3562f58be): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/certified-operators-sdtqw" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.852597 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sdtqw"] Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.934179 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" event={"ID":"1a346f08-b274-481f-b99a-77f61e80cac5","Type":"ContainerStarted","Data":"c888004fe623efd5766d687054764e19248d6c2a74f8bacdc25503c67e9fc178"} Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.934212 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: I0123 09:05:39.934835 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.969703 5117 log.go:32] "RunPodSandbox from runtime service failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(ac453762eddaa0657d87475dbd0caa7f3614871326b9c13ec09fad13d7506bd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.969767 5117 kuberuntime_sandbox.go:70] "Failed to create sandbox for pod" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(ac453762eddaa0657d87475dbd0caa7f3614871326b9c13ec09fad13d7506bd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.969788 5117 kuberuntime_manager.go:1252] "CreatePodSandbox for pod failed" err="rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(ac453762eddaa0657d87475dbd0caa7f3614871326b9c13ec09fad13d7506bd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. Has your network provider started?" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:39 crc kubenswrapper[5117]: E0123 09:05:39.969840 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"CreatePodSandbox\" for \"certified-operators-sdtqw_openshift-marketplace(ce4bff21-0202-4e22-b701-98fd01cbdd01)\" with CreatePodSandboxError: \"Failed to create sandbox for pod \\\"certified-operators-sdtqw_openshift-marketplace(ce4bff21-0202-4e22-b701-98fd01cbdd01)\\\": rpc error: code = Unknown desc = failed to create pod network sandbox k8s_certified-operators-sdtqw_openshift-marketplace_ce4bff21-0202-4e22-b701-98fd01cbdd01_0(ac453762eddaa0657d87475dbd0caa7f3614871326b9c13ec09fad13d7506bd7): no CNI configuration file in /etc/kubernetes/cni/net.d/. 
Has your network provider started?\"" pod="openshift-marketplace/certified-operators-sdtqw" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.943684 5117 generic.go:358] "Generic (PLEG): container finished" podID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerID="2e45a4ae0d26c2874fa14deab0b11df47e8896b76064453345b31e24566ae501" exitCode=0 Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.943773 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerDied","Data":"2e45a4ae0d26c2874fa14deab0b11df47e8896b76064453345b31e24566ae501"} Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.946210 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.946256 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.946266 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.971080 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.973386 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:05:40 crc kubenswrapper[5117]: I0123 09:05:40.998709 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" podStartSLOduration=9.998687865 podStartE2EDuration="9.998687865s" podCreationTimestamp="2026-01-23 09:05:31 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:05:39.966426163 +0000 UTC m=+751.722551189" watchObservedRunningTime="2026-01-23 09:05:40.998687865 +0000 UTC m=+752.754812911" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.674660 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.825644 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mbkvv\" (UniqueName: \"kubernetes.io/projected/a331b1f5-54bb-44dc-be63-3b83aee28626-kube-api-access-mbkvv\") pod \"a331b1f5-54bb-44dc-be63-3b83aee28626\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.825729 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-catalog-content\") pod \"a331b1f5-54bb-44dc-be63-3b83aee28626\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.825750 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-utilities\") pod \"a331b1f5-54bb-44dc-be63-3b83aee28626\" (UID: \"a331b1f5-54bb-44dc-be63-3b83aee28626\") " Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.826940 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-utilities" (OuterVolumeSpecName: "utilities") pod "a331b1f5-54bb-44dc-be63-3b83aee28626" (UID: "a331b1f5-54bb-44dc-be63-3b83aee28626"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.839915 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a331b1f5-54bb-44dc-be63-3b83aee28626-kube-api-access-mbkvv" (OuterVolumeSpecName: "kube-api-access-mbkvv") pod "a331b1f5-54bb-44dc-be63-3b83aee28626" (UID: "a331b1f5-54bb-44dc-be63-3b83aee28626"). InnerVolumeSpecName "kube-api-access-mbkvv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.927525 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.927566 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mbkvv\" (UniqueName: \"kubernetes.io/projected/a331b1f5-54bb-44dc-be63-3b83aee28626-kube-api-access-mbkvv\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.943177 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a331b1f5-54bb-44dc-be63-3b83aee28626" (UID: "a331b1f5-54bb-44dc-be63-3b83aee28626"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.951717 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-wcgk2" event={"ID":"a331b1f5-54bb-44dc-be63-3b83aee28626","Type":"ContainerDied","Data":"077a1c1b4bbe75c5f44c62dc9d4e1494ef06ca1a93db2334ec68f25e5a97e1b2"} Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.951815 5117 scope.go:117] "RemoveContainer" containerID="2e45a4ae0d26c2874fa14deab0b11df47e8896b76064453345b31e24566ae501" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.952292 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-wcgk2" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.972906 5117 scope.go:117] "RemoveContainer" containerID="8c890037c45ca13fcb9f54e5f00808dfec4ec5737a1b00e1ec08e6338ccb37a3" Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.990284 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-wcgk2"] Jan 23 09:05:41 crc kubenswrapper[5117]: I0123 09:05:41.993384 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-wcgk2"] Jan 23 09:05:42 crc kubenswrapper[5117]: I0123 09:05:42.015555 5117 scope.go:117] "RemoveContainer" containerID="f238ed7b62ac03470c1a487224ac01a9529ed1faed4cff779aca89818f9503c6" Jan 23 09:05:42 crc kubenswrapper[5117]: I0123 09:05:42.029048 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a331b1f5-54bb-44dc-be63-3b83aee28626-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:05:42 crc kubenswrapper[5117]: I0123 09:05:42.776199 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" path="/var/lib/kubelet/pods/a331b1f5-54bb-44dc-be63-3b83aee28626/volumes" Jan 23 09:05:45 crc kubenswrapper[5117]: I0123 09:05:45.062973 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:05:45 crc kubenswrapper[5117]: I0123 09:05:45.063402 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:05:55 crc kubenswrapper[5117]: I0123 09:05:55.769903 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:55 crc kubenswrapper[5117]: I0123 09:05:55.770809 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:56 crc kubenswrapper[5117]: I0123 09:05:56.011232 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-sdtqw"] Jan 23 09:05:56 crc kubenswrapper[5117]: I0123 09:05:56.061468 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdtqw" event={"ID":"ce4bff21-0202-4e22-b701-98fd01cbdd01","Type":"ContainerStarted","Data":"2327d656ea8af64e4ecd80a5cc8609145859ba0f90abfb529c76eddac06421aa"} Jan 23 09:05:57 crc kubenswrapper[5117]: I0123 09:05:57.069818 5117 generic.go:358] "Generic (PLEG): container finished" podID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerID="5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3" exitCode=0 Jan 23 09:05:57 crc kubenswrapper[5117]: I0123 09:05:57.070010 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdtqw" event={"ID":"ce4bff21-0202-4e22-b701-98fd01cbdd01","Type":"ContainerDied","Data":"5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3"} Jan 23 09:05:58 crc kubenswrapper[5117]: I0123 09:05:58.076175 5117 generic.go:358] "Generic (PLEG): container finished" podID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerID="f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66" exitCode=0 Jan 23 09:05:58 crc kubenswrapper[5117]: I0123 09:05:58.076250 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdtqw" event={"ID":"ce4bff21-0202-4e22-b701-98fd01cbdd01","Type":"ContainerDied","Data":"f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66"} Jan 23 09:05:59 crc kubenswrapper[5117]: I0123 09:05:59.083041 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdtqw" event={"ID":"ce4bff21-0202-4e22-b701-98fd01cbdd01","Type":"ContainerStarted","Data":"baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df"} Jan 23 09:05:59 crc kubenswrapper[5117]: I0123 09:05:59.102034 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-sdtqw" podStartSLOduration=20.399658816 podStartE2EDuration="21.102015753s" podCreationTimestamp="2026-01-23 09:05:38 +0000 UTC" firstStartedPulling="2026-01-23 09:05:57.07156459 +0000 UTC m=+768.827689656" lastFinishedPulling="2026-01-23 09:05:57.773921567 +0000 UTC m=+769.530046593" observedRunningTime="2026-01-23 09:05:59.097884214 +0000 UTC m=+770.854009240" watchObservedRunningTime="2026-01-23 09:05:59.102015753 +0000 UTC m=+770.858140779" Jan 23 09:05:59 crc kubenswrapper[5117]: I0123 09:05:59.305173 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:05:59 crc kubenswrapper[5117]: I0123 09:05:59.305491 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.127796 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485986-g7fxs"] Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129184 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="extract-content" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129218 5117 state_mem.go:107] 
"Deleted CPUSet assignment" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="extract-content" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129266 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="registry-server" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129278 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="registry-server" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129296 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="extract-utilities" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129306 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="extract-utilities" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.129454 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="a331b1f5-54bb-44dc-be63-3b83aee28626" containerName="registry-server" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.138101 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485986-g7fxs"] Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.138230 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.140874 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.140988 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.141012 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.178344 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5hwvl\" (UniqueName: \"kubernetes.io/projected/27b2bd0c-1d1b-43cb-8111-fadf43f06788-kube-api-access-5hwvl\") pod \"auto-csr-approver-29485986-g7fxs\" (UID: \"27b2bd0c-1d1b-43cb-8111-fadf43f06788\") " pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.279556 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5hwvl\" (UniqueName: \"kubernetes.io/projected/27b2bd0c-1d1b-43cb-8111-fadf43f06788-kube-api-access-5hwvl\") pod \"auto-csr-approver-29485986-g7fxs\" (UID: \"27b2bd0c-1d1b-43cb-8111-fadf43f06788\") " pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.298936 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5hwvl\" (UniqueName: \"kubernetes.io/projected/27b2bd0c-1d1b-43cb-8111-fadf43f06788-kube-api-access-5hwvl\") pod \"auto-csr-approver-29485986-g7fxs\" (UID: \"27b2bd0c-1d1b-43cb-8111-fadf43f06788\") " pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.340397 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/certified-operators-sdtqw" 
podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="registry-server" probeResult="failure" output=< Jan 23 09:06:00 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s Jan 23 09:06:00 crc kubenswrapper[5117]: > Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.452683 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:00 crc kubenswrapper[5117]: I0123 09:06:00.842431 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485986-g7fxs"] Jan 23 09:06:01 crc kubenswrapper[5117]: I0123 09:06:01.094592 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" event={"ID":"27b2bd0c-1d1b-43cb-8111-fadf43f06788","Type":"ContainerStarted","Data":"7dafc321765828efb96117f4b29e3393c5e32bc475f52a1999f93082edf33cbb"} Jan 23 09:06:02 crc kubenswrapper[5117]: I0123 09:06:02.100659 5117 generic.go:358] "Generic (PLEG): container finished" podID="27b2bd0c-1d1b-43cb-8111-fadf43f06788" containerID="7b5024136536b820c7e70cbd9b1d9916000f3cc972bfa615fd602a422cd98e16" exitCode=0 Jan 23 09:06:02 crc kubenswrapper[5117]: I0123 09:06:02.100716 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" event={"ID":"27b2bd0c-1d1b-43cb-8111-fadf43f06788","Type":"ContainerDied","Data":"7b5024136536b820c7e70cbd9b1d9916000f3cc972bfa615fd602a422cd98e16"} Jan 23 09:06:03 crc kubenswrapper[5117]: I0123 09:06:03.325172 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:03 crc kubenswrapper[5117]: I0123 09:06:03.415604 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5hwvl\" (UniqueName: \"kubernetes.io/projected/27b2bd0c-1d1b-43cb-8111-fadf43f06788-kube-api-access-5hwvl\") pod \"27b2bd0c-1d1b-43cb-8111-fadf43f06788\" (UID: \"27b2bd0c-1d1b-43cb-8111-fadf43f06788\") " Jan 23 09:06:03 crc kubenswrapper[5117]: I0123 09:06:03.421934 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/27b2bd0c-1d1b-43cb-8111-fadf43f06788-kube-api-access-5hwvl" (OuterVolumeSpecName: "kube-api-access-5hwvl") pod "27b2bd0c-1d1b-43cb-8111-fadf43f06788" (UID: "27b2bd0c-1d1b-43cb-8111-fadf43f06788"). InnerVolumeSpecName "kube-api-access-5hwvl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:03 crc kubenswrapper[5117]: I0123 09:06:03.516801 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5hwvl\" (UniqueName: \"kubernetes.io/projected/27b2bd0c-1d1b-43cb-8111-fadf43f06788-kube-api-access-5hwvl\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:04 crc kubenswrapper[5117]: I0123 09:06:04.113801 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" event={"ID":"27b2bd0c-1d1b-43cb-8111-fadf43f06788","Type":"ContainerDied","Data":"7dafc321765828efb96117f4b29e3393c5e32bc475f52a1999f93082edf33cbb"} Jan 23 09:06:04 crc kubenswrapper[5117]: I0123 09:06:04.113849 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7dafc321765828efb96117f4b29e3393c5e32bc475f52a1999f93082edf33cbb" Jan 23 09:06:04 crc kubenswrapper[5117]: I0123 09:06:04.113922 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485986-g7fxs" Jan 23 09:06:09 crc kubenswrapper[5117]: I0123 09:06:09.004676 5117 scope.go:117] "RemoveContainer" containerID="2159d23e539f3a8eebf8a266c8a1191c5d9d3a03b3228280c9be06074f58679a" Jan 23 09:06:09 crc kubenswrapper[5117]: I0123 09:06:09.029778 5117 scope.go:117] "RemoveContainer" containerID="b54f8abf9698c7d13f9379cc94bdb4312ebbc6b678c9c1231f59af21b597cd1a" Jan 23 09:06:09 crc kubenswrapper[5117]: I0123 09:06:09.340341 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:06:09 crc kubenswrapper[5117]: I0123 09:06:09.383671 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:06:09 crc kubenswrapper[5117]: I0123 09:06:09.809791 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sdtqw"] Jan 23 09:06:11 crc kubenswrapper[5117]: I0123 09:06:11.153809 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-sdtqw" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="registry-server" containerID="cri-o://baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df" gracePeriod=2 Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.058446 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.146845 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-utilities\") pod \"ce4bff21-0202-4e22-b701-98fd01cbdd01\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.146964 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-s27w2\" (UniqueName: \"kubernetes.io/projected/ce4bff21-0202-4e22-b701-98fd01cbdd01-kube-api-access-s27w2\") pod \"ce4bff21-0202-4e22-b701-98fd01cbdd01\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.147021 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-catalog-content\") pod \"ce4bff21-0202-4e22-b701-98fd01cbdd01\" (UID: \"ce4bff21-0202-4e22-b701-98fd01cbdd01\") " Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.148223 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-utilities" (OuterVolumeSpecName: "utilities") pod "ce4bff21-0202-4e22-b701-98fd01cbdd01" (UID: "ce4bff21-0202-4e22-b701-98fd01cbdd01"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.157403 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/ce4bff21-0202-4e22-b701-98fd01cbdd01-kube-api-access-s27w2" (OuterVolumeSpecName: "kube-api-access-s27w2") pod "ce4bff21-0202-4e22-b701-98fd01cbdd01" (UID: "ce4bff21-0202-4e22-b701-98fd01cbdd01"). InnerVolumeSpecName "kube-api-access-s27w2". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.162071 5117 generic.go:358] "Generic (PLEG): container finished" podID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerID="baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df" exitCode=0 Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.162115 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdtqw" event={"ID":"ce4bff21-0202-4e22-b701-98fd01cbdd01","Type":"ContainerDied","Data":"baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df"} Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.162162 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-sdtqw" event={"ID":"ce4bff21-0202-4e22-b701-98fd01cbdd01","Type":"ContainerDied","Data":"2327d656ea8af64e4ecd80a5cc8609145859ba0f90abfb529c76eddac06421aa"} Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.162187 5117 scope.go:117] "RemoveContainer" containerID="baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.162342 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-sdtqw" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.180330 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "ce4bff21-0202-4e22-b701-98fd01cbdd01" (UID: "ce4bff21-0202-4e22-b701-98fd01cbdd01"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.186412 5117 scope.go:117] "RemoveContainer" containerID="f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.201395 5117 scope.go:117] "RemoveContainer" containerID="5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.224067 5117 scope.go:117] "RemoveContainer" containerID="baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df" Jan 23 09:06:12 crc kubenswrapper[5117]: E0123 09:06:12.224473 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df\": container with ID starting with baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df not found: ID does not exist" containerID="baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.224633 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df"} err="failed to get container status \"baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df\": rpc error: code = NotFound desc = could not find container \"baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df\": container with ID starting with baba8ab82c86f064ce3ebd7f04b16f993caa43641f50708ba6def9921589f1df not found: ID does not exist" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.224665 5117 scope.go:117] "RemoveContainer" containerID="f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66" Jan 
23 09:06:12 crc kubenswrapper[5117]: E0123 09:06:12.225189 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66\": container with ID starting with f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66 not found: ID does not exist" containerID="f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.225231 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66"} err="failed to get container status \"f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66\": rpc error: code = NotFound desc = could not find container \"f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66\": container with ID starting with f3530f4f52ab0a74c7c5e4e27ba86514b55a95bcb15e53ebb5b07009964a7e66 not found: ID does not exist" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.225252 5117 scope.go:117] "RemoveContainer" containerID="5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3" Jan 23 09:06:12 crc kubenswrapper[5117]: E0123 09:06:12.225518 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3\": container with ID starting with 5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3 not found: ID does not exist" containerID="5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.225544 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3"} err="failed to get container status \"5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3\": rpc error: code = NotFound desc = could not find container \"5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3\": container with ID starting with 5a09243e9a4ffa404b1e1f1bcabb3ad8b5aa4de104be2c13ad0ff9d402d35ac3 not found: ID does not exist" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.248681 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.248735 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/ce4bff21-0202-4e22-b701-98fd01cbdd01-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.248748 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-s27w2\" (UniqueName: \"kubernetes.io/projected/ce4bff21-0202-4e22-b701-98fd01cbdd01-kube-api-access-s27w2\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.488234 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-sdtqw"] Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.506170 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-sdtqw"] Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.803606 5117 kubelet_volumes.go:163] "Cleaned up 
orphaned pod volumes dir" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" path="/var/lib/kubelet/pods/ce4bff21-0202-4e22-b701-98fd01cbdd01/volumes" Jan 23 09:06:12 crc kubenswrapper[5117]: I0123 09:06:12.987618 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-ovn-kubernetes/ovnkube-node-9tkct" Jan 23 09:06:15 crc kubenswrapper[5117]: I0123 09:06:15.063779 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:06:15 crc kubenswrapper[5117]: I0123 09:06:15.064482 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.408683 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-marketplace-j48h8"] Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414009 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="registry-server" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414049 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="registry-server" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414079 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="extract-utilities" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414095 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="extract-utilities" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414206 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="27b2bd0c-1d1b-43cb-8111-fadf43f06788" containerName="oc" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414223 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="27b2bd0c-1d1b-43cb-8111-fadf43f06788" containerName="oc" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414243 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="extract-content" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414255 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="extract-content" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414437 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="27b2bd0c-1d1b-43cb-8111-fadf43f06788" containerName="oc" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.414461 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="ce4bff21-0202-4e22-b701-98fd01cbdd01" containerName="registry-server" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.430353 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48h8"] Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.430592 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.515630 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jff74\" (UniqueName: \"kubernetes.io/projected/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-kube-api-access-jff74\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.515715 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-utilities\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.515804 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-catalog-content\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.617457 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-utilities\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.617501 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-catalog-content\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.617997 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-utilities\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.618027 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-catalog-content\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.618138 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jff74\" (UniqueName: \"kubernetes.io/projected/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-kube-api-access-jff74\") pod \"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.644830 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jff74\" (UniqueName: \"kubernetes.io/projected/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-kube-api-access-jff74\") pod 
\"redhat-marketplace-j48h8\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:24 crc kubenswrapper[5117]: I0123 09:06:24.761435 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:25 crc kubenswrapper[5117]: I0123 09:06:25.198508 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48h8"] Jan 23 09:06:25 crc kubenswrapper[5117]: I0123 09:06:25.247293 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48h8" event={"ID":"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e","Type":"ContainerStarted","Data":"692fa40f60b9128460201b3a8efb9d15e42e1ee391d37f830f06ff090b9eebff"} Jan 23 09:06:26 crc kubenswrapper[5117]: I0123 09:06:26.256836 5117 generic.go:358] "Generic (PLEG): container finished" podID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerID="7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c" exitCode=0 Jan 23 09:06:26 crc kubenswrapper[5117]: I0123 09:06:26.257067 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48h8" event={"ID":"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e","Type":"ContainerDied","Data":"7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c"} Jan 23 09:06:27 crc kubenswrapper[5117]: I0123 09:06:27.265024 5117 generic.go:358] "Generic (PLEG): container finished" podID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerID="059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73" exitCode=0 Jan 23 09:06:27 crc kubenswrapper[5117]: I0123 09:06:27.265104 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48h8" event={"ID":"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e","Type":"ContainerDied","Data":"059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73"} Jan 23 09:06:28 crc kubenswrapper[5117]: I0123 09:06:28.276985 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48h8" event={"ID":"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e","Type":"ContainerStarted","Data":"301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332"} Jan 23 09:06:28 crc kubenswrapper[5117]: I0123 09:06:28.300129 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-marketplace-j48h8" podStartSLOduration=3.728084163 podStartE2EDuration="4.300105294s" podCreationTimestamp="2026-01-23 09:06:24 +0000 UTC" firstStartedPulling="2026-01-23 09:06:26.258242216 +0000 UTC m=+798.014367282" lastFinishedPulling="2026-01-23 09:06:26.830263357 +0000 UTC m=+798.586388413" observedRunningTime="2026-01-23 09:06:28.294757199 +0000 UTC m=+800.050882235" watchObservedRunningTime="2026-01-23 09:06:28.300105294 +0000 UTC m=+800.056230330" Jan 23 09:06:34 crc kubenswrapper[5117]: I0123 09:06:34.762505 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:34 crc kubenswrapper[5117]: I0123 09:06:34.763122 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:34 crc kubenswrapper[5117]: I0123 09:06:34.797834 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:35 crc kubenswrapper[5117]: I0123 
09:06:35.356373 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:35 crc kubenswrapper[5117]: I0123 09:06:35.399604 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48h8"] Jan 23 09:06:37 crc kubenswrapper[5117]: I0123 09:06:37.326997 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-j48h8" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="registry-server" containerID="cri-o://301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332" gracePeriod=2 Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.335681 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.341176 5117 generic.go:358] "Generic (PLEG): container finished" podID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerID="301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332" exitCode=0 Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.341261 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-j48h8" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.341278 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48h8" event={"ID":"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e","Type":"ContainerDied","Data":"301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332"} Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.341416 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-j48h8" event={"ID":"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e","Type":"ContainerDied","Data":"692fa40f60b9128460201b3a8efb9d15e42e1ee391d37f830f06ff090b9eebff"} Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.341447 5117 scope.go:117] "RemoveContainer" containerID="301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.362550 5117 scope.go:117] "RemoveContainer" containerID="059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.387464 5117 scope.go:117] "RemoveContainer" containerID="7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.402837 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-catalog-content\") pod \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.402991 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jff74\" (UniqueName: \"kubernetes.io/projected/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-kube-api-access-jff74\") pod \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\" (UID: \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.403161 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-utilities\") pod \"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\" (UID: 
\"49a5cff0-fda4-4b70-b2bf-7911eb86fd1e\") " Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.404696 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-utilities" (OuterVolumeSpecName: "utilities") pod "49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" (UID: "49a5cff0-fda4-4b70-b2bf-7911eb86fd1e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.405427 5117 scope.go:117] "RemoveContainer" containerID="301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332" Jan 23 09:06:39 crc kubenswrapper[5117]: E0123 09:06:39.405896 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332\": container with ID starting with 301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332 not found: ID does not exist" containerID="301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.405941 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332"} err="failed to get container status \"301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332\": rpc error: code = NotFound desc = could not find container \"301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332\": container with ID starting with 301b5b15086ffb40b1705c818dacb35bc6e634ebbe75bd641d6399d868a2d332 not found: ID does not exist" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.405969 5117 scope.go:117] "RemoveContainer" containerID="059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73" Jan 23 09:06:39 crc kubenswrapper[5117]: E0123 09:06:39.406369 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73\": container with ID starting with 059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73 not found: ID does not exist" containerID="059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.406408 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73"} err="failed to get container status \"059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73\": rpc error: code = NotFound desc = could not find container \"059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73\": container with ID starting with 059c99a4f376869b57a7e6af52a50597b74fba2eb1ad750c07740945cbd04d73 not found: ID does not exist" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.406435 5117 scope.go:117] "RemoveContainer" containerID="7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c" Jan 23 09:06:39 crc kubenswrapper[5117]: E0123 09:06:39.406687 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c\": container with ID starting with 7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c not found: ID does not exist" 
containerID="7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.406720 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c"} err="failed to get container status \"7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c\": rpc error: code = NotFound desc = could not find container \"7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c\": container with ID starting with 7c48a0bec909c0a39e4d3a4e807ce27d642c619913c2909a8dd4d9f8bf89d20c not found: ID does not exist" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.409693 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-kube-api-access-jff74" (OuterVolumeSpecName: "kube-api-access-jff74") pod "49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" (UID: "49a5cff0-fda4-4b70-b2bf-7911eb86fd1e"). InnerVolumeSpecName "kube-api-access-jff74". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.418569 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" (UID: "49a5cff0-fda4-4b70-b2bf-7911eb86fd1e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.504397 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.504631 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.504720 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jff74\" (UniqueName: \"kubernetes.io/projected/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e-kube-api-access-jff74\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.674541 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48h8"] Jan 23 09:06:39 crc kubenswrapper[5117]: I0123 09:06:39.677394 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-j48h8"] Jan 23 09:06:40 crc kubenswrapper[5117]: I0123 09:06:40.778380 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" path="/var/lib/kubelet/pods/49a5cff0-fda4-4b70-b2bf-7911eb86fd1e/volumes" Jan 23 09:06:43 crc kubenswrapper[5117]: I0123 09:06:43.075927 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8st29"] Jan 23 09:06:43 crc kubenswrapper[5117]: I0123 09:06:43.076530 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-marketplace-8st29" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="registry-server" containerID="cri-o://08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f" gracePeriod=30 Jan 23 09:06:44 crc 
kubenswrapper[5117]: I0123 09:06:44.075484 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.173447 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-catalog-content\") pod \"e4da0216-1017-40ba-b67e-1fdef15faa65\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.173511 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-utilities\") pod \"e4da0216-1017-40ba-b67e-1fdef15faa65\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.173535 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-fk7ms\" (UniqueName: \"kubernetes.io/projected/e4da0216-1017-40ba-b67e-1fdef15faa65-kube-api-access-fk7ms\") pod \"e4da0216-1017-40ba-b67e-1fdef15faa65\" (UID: \"e4da0216-1017-40ba-b67e-1fdef15faa65\") " Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.176432 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-utilities" (OuterVolumeSpecName: "utilities") pod "e4da0216-1017-40ba-b67e-1fdef15faa65" (UID: "e4da0216-1017-40ba-b67e-1fdef15faa65"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.178802 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e4da0216-1017-40ba-b67e-1fdef15faa65-kube-api-access-fk7ms" (OuterVolumeSpecName: "kube-api-access-fk7ms") pod "e4da0216-1017-40ba-b67e-1fdef15faa65" (UID: "e4da0216-1017-40ba-b67e-1fdef15faa65"). InnerVolumeSpecName "kube-api-access-fk7ms". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.185396 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e4da0216-1017-40ba-b67e-1fdef15faa65" (UID: "e4da0216-1017-40ba-b67e-1fdef15faa65"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.275265 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.275314 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-fk7ms\" (UniqueName: \"kubernetes.io/projected/e4da0216-1017-40ba-b67e-1fdef15faa65-kube-api-access-fk7ms\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.275327 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e4da0216-1017-40ba-b67e-1fdef15faa65-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.377279 5117 generic.go:358] "Generic (PLEG): container finished" podID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerID="08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f" exitCode=0 Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.377506 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8st29" event={"ID":"e4da0216-1017-40ba-b67e-1fdef15faa65","Type":"ContainerDied","Data":"08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f"} Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.377540 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-marketplace-8st29" event={"ID":"e4da0216-1017-40ba-b67e-1fdef15faa65","Type":"ContainerDied","Data":"8bd2853da430fe2988f34edc417f9e075f26e11effa1094fe6098610880982d4"} Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.377561 5117 scope.go:117] "RemoveContainer" containerID="08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.377727 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-marketplace-8st29" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.405731 5117 scope.go:117] "RemoveContainer" containerID="31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.406441 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-marketplace-8st29"] Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.412093 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-marketplace-8st29"] Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.419389 5117 scope.go:117] "RemoveContainer" containerID="86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.435079 5117 scope.go:117] "RemoveContainer" containerID="08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f" Jan 23 09:06:44 crc kubenswrapper[5117]: E0123 09:06:44.435725 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f\": container with ID starting with 08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f not found: ID does not exist" containerID="08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.435780 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f"} err="failed to get container status \"08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f\": rpc error: code = NotFound desc = could not find container \"08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f\": container with ID starting with 08402ab552cc056f84ef3db6153b42d7112c18d33481283125cab6fe6e66a90f not found: ID does not exist" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.435809 5117 scope.go:117] "RemoveContainer" containerID="31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76" Jan 23 09:06:44 crc kubenswrapper[5117]: E0123 09:06:44.436211 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76\": container with ID starting with 31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76 not found: ID does not exist" containerID="31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.436262 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76"} err="failed to get container status \"31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76\": rpc error: code = NotFound desc = could not find container \"31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76\": container with ID starting with 31cccb88e5aca319d6c16d551794b77be1628efcc45a261c0ea3a0d60a683e76 not found: ID does not exist" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.436288 5117 scope.go:117] "RemoveContainer" containerID="86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127" Jan 23 09:06:44 crc kubenswrapper[5117]: E0123 09:06:44.436572 5117 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127\": container with ID starting with 86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127 not found: ID does not exist" containerID="86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.436612 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127"} err="failed to get container status \"86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127\": rpc error: code = NotFound desc = could not find container \"86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127\": container with ID starting with 86b3a145408c43e5bca410caed4708c744b43d2c060ece517cff7d9a58152127 not found: ID does not exist" Jan 23 09:06:44 crc kubenswrapper[5117]: I0123 09:06:44.781657 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" path="/var/lib/kubelet/pods/e4da0216-1017-40ba-b67e-1fdef15faa65/volumes" Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.063070 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.063812 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.064035 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.065498 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"652ecc0b605bacfe2a16dd899d40ace6b8068602de3571538cddd08bf11b07a4"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.067036 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://652ecc0b605bacfe2a16dd899d40ace6b8068602de3571538cddd08bf11b07a4" gracePeriod=600 Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.389107 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="652ecc0b605bacfe2a16dd899d40ace6b8068602de3571538cddd08bf11b07a4" exitCode=0 Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.389186 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" 
event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"652ecc0b605bacfe2a16dd899d40ace6b8068602de3571538cddd08bf11b07a4"} Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.389849 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"cca1814ea22f487f6803da65dbe0a07d6e9a455a9d99b67f2cffa31f9de502dd"} Jan 23 09:06:45 crc kubenswrapper[5117]: I0123 09:06:45.389886 5117 scope.go:117] "RemoveContainer" containerID="b97661a90bbce2393db26e77bd29a143f0d98233c5c69fccbeb1b1da6802d3e6" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.786597 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf"] Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787450 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="extract-utilities" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787465 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="extract-utilities" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787481 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="extract-content" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787487 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="extract-content" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787501 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="registry-server" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787508 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="registry-server" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787520 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="registry-server" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787525 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="registry-server" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787533 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="extract-utilities" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787539 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="extract-utilities" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787549 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="extract-content" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787554 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="extract-content" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787659 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e4da0216-1017-40ba-b67e-1fdef15faa65" containerName="registry-server" Jan 
23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.787679 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="49a5cff0-fda4-4b70-b2bf-7911eb86fd1e" containerName="registry-server" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.791342 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.794693 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.800071 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf"] Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.907957 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.908015 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-57tcc\" (UniqueName: \"kubernetes.io/projected/929b31d5-d5f7-4610-960d-bcb4817b7eee-kube-api-access-57tcc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:46 crc kubenswrapper[5117]: I0123 09:06:46.908187 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.009520 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.009576 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-57tcc\" (UniqueName: \"kubernetes.io/projected/929b31d5-d5f7-4610-960d-bcb4817b7eee-kube-api-access-57tcc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.009654 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: 
\"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.010017 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-util\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.010151 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-bundle\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.028495 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-57tcc\" (UniqueName: \"kubernetes.io/projected/929b31d5-d5f7-4610-960d-bcb4817b7eee-kube-api-access-57tcc\") pod \"98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.117557 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:47 crc kubenswrapper[5117]: I0123 09:06:47.598710 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf"] Jan 23 09:06:47 crc kubenswrapper[5117]: W0123 09:06:47.604819 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod929b31d5_d5f7_4610_960d_bcb4817b7eee.slice/crio-afd6074136884dc4b77d7fc18cba711f487735b94ab903ba167cec67be91e236 WatchSource:0}: Error finding container afd6074136884dc4b77d7fc18cba711f487735b94ab903ba167cec67be91e236: Status 404 returned error can't find the container with id afd6074136884dc4b77d7fc18cba711f487735b94ab903ba167cec67be91e236 Jan 23 09:06:48 crc kubenswrapper[5117]: I0123 09:06:48.410735 5117 generic.go:358] "Generic (PLEG): container finished" podID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerID="9251df7f04a07cf93e6c90bdf3494a536f49edef43592a15b21fb1a8333ddce8" exitCode=0 Jan 23 09:06:48 crc kubenswrapper[5117]: I0123 09:06:48.411164 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" event={"ID":"929b31d5-d5f7-4610-960d-bcb4817b7eee","Type":"ContainerDied","Data":"9251df7f04a07cf93e6c90bdf3494a536f49edef43592a15b21fb1a8333ddce8"} Jan 23 09:06:48 crc kubenswrapper[5117]: I0123 09:06:48.411245 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" event={"ID":"929b31d5-d5f7-4610-960d-bcb4817b7eee","Type":"ContainerStarted","Data":"afd6074136884dc4b77d7fc18cba711f487735b94ab903ba167cec67be91e236"} Jan 23 09:06:50 crc kubenswrapper[5117]: I0123 09:06:50.425296 5117 generic.go:358] "Generic 
(PLEG): container finished" podID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerID="269d994e1b263f702e9c930338d2fcb7c8f15a09f1686ad11772ed216b8f3d1c" exitCode=0 Jan 23 09:06:50 crc kubenswrapper[5117]: I0123 09:06:50.425393 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" event={"ID":"929b31d5-d5f7-4610-960d-bcb4817b7eee","Type":"ContainerDied","Data":"269d994e1b263f702e9c930338d2fcb7c8f15a09f1686ad11772ed216b8f3d1c"} Jan 23 09:06:51 crc kubenswrapper[5117]: I0123 09:06:51.434473 5117 generic.go:358] "Generic (PLEG): container finished" podID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerID="064630d444f70ea7871007a40003f7f7923f815f2025a0a83a25856483e33a3a" exitCode=0 Jan 23 09:06:51 crc kubenswrapper[5117]: I0123 09:06:51.434540 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" event={"ID":"929b31d5-d5f7-4610-960d-bcb4817b7eee","Type":"ContainerDied","Data":"064630d444f70ea7871007a40003f7f7923f815f2025a0a83a25856483e33a3a"} Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.646337 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.796175 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-57tcc\" (UniqueName: \"kubernetes.io/projected/929b31d5-d5f7-4610-960d-bcb4817b7eee-kube-api-access-57tcc\") pod \"929b31d5-d5f7-4610-960d-bcb4817b7eee\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.796540 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-bundle\") pod \"929b31d5-d5f7-4610-960d-bcb4817b7eee\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.796562 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-util\") pod \"929b31d5-d5f7-4610-960d-bcb4817b7eee\" (UID: \"929b31d5-d5f7-4610-960d-bcb4817b7eee\") " Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.799301 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-bundle" (OuterVolumeSpecName: "bundle") pod "929b31d5-d5f7-4610-960d-bcb4817b7eee" (UID: "929b31d5-d5f7-4610-960d-bcb4817b7eee"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.803867 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/929b31d5-d5f7-4610-960d-bcb4817b7eee-kube-api-access-57tcc" (OuterVolumeSpecName: "kube-api-access-57tcc") pod "929b31d5-d5f7-4610-960d-bcb4817b7eee" (UID: "929b31d5-d5f7-4610-960d-bcb4817b7eee"). InnerVolumeSpecName "kube-api-access-57tcc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.813611 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-util" (OuterVolumeSpecName: "util") pod "929b31d5-d5f7-4610-960d-bcb4817b7eee" (UID: "929b31d5-d5f7-4610-960d-bcb4817b7eee"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.897943 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.897995 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/929b31d5-d5f7-4610-960d-bcb4817b7eee-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:52 crc kubenswrapper[5117]: I0123 09:06:52.898009 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-57tcc\" (UniqueName: \"kubernetes.io/projected/929b31d5-d5f7-4610-960d-bcb4817b7eee-kube-api-access-57tcc\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.182554 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd"] Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183268 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="util" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183285 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="util" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183306 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="pull" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183312 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="pull" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183326 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="extract" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183332 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="extract" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.183423 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="929b31d5-d5f7-4610-960d-bcb4817b7eee" containerName="extract" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.188039 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.193911 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd"] Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.303456 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.303766 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.303866 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-25vgl\" (UniqueName: \"kubernetes.io/projected/672c3b18-7345-48b9-ad22-86920cf6e02d-kube-api-access-25vgl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.405823 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-25vgl\" (UniqueName: \"kubernetes.io/projected/672c3b18-7345-48b9-ad22-86920cf6e02d-kube-api-access-25vgl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.405897 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.405946 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.406560 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-bundle\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " 
pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.406593 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-util\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.436779 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-25vgl\" (UniqueName: \"kubernetes.io/projected/672c3b18-7345-48b9-ad22-86920cf6e02d-kube-api-access-25vgl\") pod \"8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.450307 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" event={"ID":"929b31d5-d5f7-4610-960d-bcb4817b7eee","Type":"ContainerDied","Data":"afd6074136884dc4b77d7fc18cba711f487735b94ab903ba167cec67be91e236"} Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.450363 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="afd6074136884dc4b77d7fc18cba711f487735b94ab903ba167cec67be91e236" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.450385 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.502895 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:53 crc kubenswrapper[5117]: I0123 09:06:53.992292 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd"] Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.178969 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w"] Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.185961 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.189041 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w"] Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.319802 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-phrf4\" (UniqueName: \"kubernetes.io/projected/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-kube-api-access-phrf4\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.319855 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.319890 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.421341 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.421665 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-phrf4\" (UniqueName: \"kubernetes.io/projected/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-kube-api-access-phrf4\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.421771 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.422219 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-util\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " 
pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.422298 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-bundle\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.443783 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-phrf4\" (UniqueName: \"kubernetes.io/projected/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-kube-api-access-phrf4\") pod \"6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.458524 5117 generic.go:358] "Generic (PLEG): container finished" podID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerID="0f8c11b6a15413ab2b145724a24dcd38a8db53704b013cfce0dbfeada8e70287" exitCode=0 Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.458648 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" event={"ID":"672c3b18-7345-48b9-ad22-86920cf6e02d","Type":"ContainerDied","Data":"0f8c11b6a15413ab2b145724a24dcd38a8db53704b013cfce0dbfeada8e70287"} Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.459003 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" event={"ID":"672c3b18-7345-48b9-ad22-86920cf6e02d","Type":"ContainerStarted","Data":"d97bbb1cc5c1a06080d2f2bf8aca9d847526b449095db349e4df354247c32bb0"} Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.511128 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:54 crc kubenswrapper[5117]: I0123 09:06:54.711200 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w"] Jan 23 09:06:55 crc kubenswrapper[5117]: I0123 09:06:55.467741 5117 generic.go:358] "Generic (PLEG): container finished" podID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerID="96286103749f75c538742c024b84f0222e2f50bedcb584e24af2a9c0dc56caeb" exitCode=0 Jan 23 09:06:55 crc kubenswrapper[5117]: I0123 09:06:55.467867 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" event={"ID":"66db71e8-761f-4bae-a7d3-c48c50f8c4f4","Type":"ContainerDied","Data":"96286103749f75c538742c024b84f0222e2f50bedcb584e24af2a9c0dc56caeb"} Jan 23 09:06:55 crc kubenswrapper[5117]: I0123 09:06:55.468448 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" event={"ID":"66db71e8-761f-4bae-a7d3-c48c50f8c4f4","Type":"ContainerStarted","Data":"4b3901c59c184c049537e0e380d73f5bd28aeed282e3b8ef68f2815f43851cb4"} Jan 23 09:06:55 crc kubenswrapper[5117]: I0123 09:06:55.472217 5117 generic.go:358] "Generic (PLEG): container finished" podID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerID="81a81c52fa2de9789e815334409f7ebc79f79df7bffbb463a89d1977ddff947f" exitCode=0 Jan 23 09:06:55 crc kubenswrapper[5117]: I0123 09:06:55.472284 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" event={"ID":"672c3b18-7345-48b9-ad22-86920cf6e02d","Type":"ContainerDied","Data":"81a81c52fa2de9789e815334409f7ebc79f79df7bffbb463a89d1977ddff947f"} Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.480389 5117 generic.go:358] "Generic (PLEG): container finished" podID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerID="3b9ffb1db8bef4b31085e6234fb40ca94ef64b387767a705bcc491a5360d302b" exitCode=0 Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.480511 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" event={"ID":"672c3b18-7345-48b9-ad22-86920cf6e02d","Type":"ContainerDied","Data":"3b9ffb1db8bef4b31085e6234fb40ca94ef64b387767a705bcc491a5360d302b"} Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.748213 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-2bzps"] Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.756772 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.760716 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2bzps"] Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.854169 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-catalog-content\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.854231 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-utilities\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.854301 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qdvkt\" (UniqueName: \"kubernetes.io/projected/28473e88-fc17-4102-95da-44da0dd3e500-kube-api-access-qdvkt\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.955815 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-catalog-content\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.955899 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-utilities\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.955959 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qdvkt\" (UniqueName: \"kubernetes.io/projected/28473e88-fc17-4102-95da-44da0dd3e500-kube-api-access-qdvkt\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.956327 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-catalog-content\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.956363 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-utilities\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:56 crc kubenswrapper[5117]: I0123 09:06:56.988801 5117 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"kube-api-access-qdvkt\" (UniqueName: \"kubernetes.io/projected/28473e88-fc17-4102-95da-44da0dd3e500-kube-api-access-qdvkt\") pod \"community-operators-2bzps\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:57 crc kubenswrapper[5117]: I0123 09:06:57.076672 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:06:57 crc kubenswrapper[5117]: I0123 09:06:57.447361 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-2bzps"] Jan 23 09:06:57 crc kubenswrapper[5117]: I0123 09:06:57.486624 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerStarted","Data":"b0bb0d0f95b219177ab7b6cf4287742c928a19ca3c7c77557629df329a61f798"} Jan 23 09:06:57 crc kubenswrapper[5117]: I0123 09:06:57.488241 5117 generic.go:358] "Generic (PLEG): container finished" podID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerID="27f29c81c447030d334257ea293f849d24b1bbeb7eb0435f4f8f8dcf64894076" exitCode=0 Jan 23 09:06:57 crc kubenswrapper[5117]: I0123 09:06:57.488356 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" event={"ID":"66db71e8-761f-4bae-a7d3-c48c50f8c4f4","Type":"ContainerDied","Data":"27f29c81c447030d334257ea293f849d24b1bbeb7eb0435f4f8f8dcf64894076"} Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.003273 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.075700 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-25vgl\" (UniqueName: \"kubernetes.io/projected/672c3b18-7345-48b9-ad22-86920cf6e02d-kube-api-access-25vgl\") pod \"672c3b18-7345-48b9-ad22-86920cf6e02d\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.076043 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-bundle\") pod \"672c3b18-7345-48b9-ad22-86920cf6e02d\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.076215 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-util\") pod \"672c3b18-7345-48b9-ad22-86920cf6e02d\" (UID: \"672c3b18-7345-48b9-ad22-86920cf6e02d\") " Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.076919 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-bundle" (OuterVolumeSpecName: "bundle") pod "672c3b18-7345-48b9-ad22-86920cf6e02d" (UID: "672c3b18-7345-48b9-ad22-86920cf6e02d"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.084997 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/672c3b18-7345-48b9-ad22-86920cf6e02d-kube-api-access-25vgl" (OuterVolumeSpecName: "kube-api-access-25vgl") pod "672c3b18-7345-48b9-ad22-86920cf6e02d" (UID: "672c3b18-7345-48b9-ad22-86920cf6e02d"). InnerVolumeSpecName "kube-api-access-25vgl". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.095759 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-util" (OuterVolumeSpecName: "util") pod "672c3b18-7345-48b9-ad22-86920cf6e02d" (UID: "672c3b18-7345-48b9-ad22-86920cf6e02d"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.177792 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-25vgl\" (UniqueName: \"kubernetes.io/projected/672c3b18-7345-48b9-ad22-86920cf6e02d-kube-api-access-25vgl\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.177836 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.177848 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/672c3b18-7345-48b9-ad22-86920cf6e02d-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:58 crc kubenswrapper[5117]: E0123 09:06:58.277145 5117 cadvisor_stats_provider.go:525] "Partial failure issuing cadvisor.ContainerInfoV2" err="partial failures: [\"/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod66db71e8_761f_4bae_a7d3_c48c50f8c4f4.slice/crio-aec1c182ec94aa7e1c927803f9929214feb428514834f0acdd6593441e9908bc.scope\": RecentStats: unable to find data in memory cache]" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.496466 5117 generic.go:358] "Generic (PLEG): container finished" podID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerID="aec1c182ec94aa7e1c927803f9929214feb428514834f0acdd6593441e9908bc" exitCode=0 Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.496523 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" event={"ID":"66db71e8-761f-4bae-a7d3-c48c50f8c4f4","Type":"ContainerDied","Data":"aec1c182ec94aa7e1c927803f9929214feb428514834f0acdd6593441e9908bc"} Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.499251 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.499250 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd" event={"ID":"672c3b18-7345-48b9-ad22-86920cf6e02d","Type":"ContainerDied","Data":"d97bbb1cc5c1a06080d2f2bf8aca9d847526b449095db349e4df354247c32bb0"} Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.499378 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d97bbb1cc5c1a06080d2f2bf8aca9d847526b449095db349e4df354247c32bb0" Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.500972 5117 generic.go:358] "Generic (PLEG): container finished" podID="28473e88-fc17-4102-95da-44da0dd3e500" containerID="d091efc4a02b96eebbe1926f3ab77a4484064d97a4e899dec9cc526f81bae08d" exitCode=0 Jan 23 09:06:58 crc kubenswrapper[5117]: I0123 09:06:58.501057 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerDied","Data":"d091efc4a02b96eebbe1926f3ab77a4484064d97a4e899dec9cc526f81bae08d"} Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.509002 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerStarted","Data":"0db5ed479c9d76bbdbf4ee968edc685597c351dc58354439310b9fa1d040075b"} Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.867741 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.897194 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-util\") pod \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.897334 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-bundle\") pod \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.897869 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-bundle" (OuterVolumeSpecName: "bundle") pod "66db71e8-761f-4bae-a7d3-c48c50f8c4f4" (UID: "66db71e8-761f-4bae-a7d3-c48c50f8c4f4"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.897953 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-phrf4\" (UniqueName: \"kubernetes.io/projected/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-kube-api-access-phrf4\") pod \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\" (UID: \"66db71e8-761f-4bae-a7d3-c48c50f8c4f4\") " Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.898984 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.910438 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-kube-api-access-phrf4" (OuterVolumeSpecName: "kube-api-access-phrf4") pod "66db71e8-761f-4bae-a7d3-c48c50f8c4f4" (UID: "66db71e8-761f-4bae-a7d3-c48c50f8c4f4"). InnerVolumeSpecName "kube-api-access-phrf4". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:06:59 crc kubenswrapper[5117]: I0123 09:06:59.910798 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-util" (OuterVolumeSpecName: "util") pod "66db71e8-761f-4bae-a7d3-c48c50f8c4f4" (UID: "66db71e8-761f-4bae-a7d3-c48c50f8c4f4"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.000643 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.000891 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-phrf4\" (UniqueName: \"kubernetes.io/projected/66db71e8-761f-4bae-a7d3-c48c50f8c4f4-kube-api-access-phrf4\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.515532 5117 generic.go:358] "Generic (PLEG): container finished" podID="28473e88-fc17-4102-95da-44da0dd3e500" containerID="0db5ed479c9d76bbdbf4ee968edc685597c351dc58354439310b9fa1d040075b" exitCode=0 Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.515733 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerDied","Data":"0db5ed479c9d76bbdbf4ee968edc685597c351dc58354439310b9fa1d040075b"} Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.519240 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" event={"ID":"66db71e8-761f-4bae-a7d3-c48c50f8c4f4","Type":"ContainerDied","Data":"4b3901c59c184c049537e0e380d73f5bd28aeed282e3b8ef68f2815f43851cb4"} Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.519384 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="4b3901c59c184c049537e0e380d73f5bd28aeed282e3b8ef68f2815f43851cb4" Jan 23 09:07:00 crc kubenswrapper[5117]: I0123 09:07:00.519423 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w" Jan 23 09:07:01 crc kubenswrapper[5117]: I0123 09:07:01.527515 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerStarted","Data":"62a93081cb11f7528378fef5cf379ba2c3f953fc03049523414b49ccf2b4ac14"} Jan 23 09:07:01 crc kubenswrapper[5117]: I0123 09:07:01.547453 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-2bzps" podStartSLOduration=4.932940336 podStartE2EDuration="5.547438103s" podCreationTimestamp="2026-01-23 09:06:56 +0000 UTC" firstStartedPulling="2026-01-23 09:06:58.501768834 +0000 UTC m=+830.257893860" lastFinishedPulling="2026-01-23 09:06:59.116266601 +0000 UTC m=+830.872391627" observedRunningTime="2026-01-23 09:07:01.542300723 +0000 UTC m=+833.298425749" watchObservedRunningTime="2026-01-23 09:07:01.547438103 +0000 UTC m=+833.303563129" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.397878 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z"] Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398648 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="util" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398677 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="util" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398708 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="pull" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398717 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="pull" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398733 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="extract" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398741 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="extract" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398752 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="pull" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398760 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="pull" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398771 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="util" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398777 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="util" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398786 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="extract" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398794 5117 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="extract" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398929 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="66db71e8-761f-4bae-a7d3-c48c50f8c4f4" containerName="extract" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.398951 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="672c3b18-7345-48b9-ad22-86920cf6e02d" containerName="extract" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.416185 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z"] Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.416464 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.418968 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-marketplace\"/\"default-dockercfg-b2ccr\"" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.431664 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-m9dhb\" (UniqueName: \"kubernetes.io/projected/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-kube-api-access-m9dhb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.431705 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.431735 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.533977 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.534069 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-m9dhb\" (UniqueName: \"kubernetes.io/projected/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-kube-api-access-m9dhb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 
09:07:02.534095 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.534517 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-bundle\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.534791 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-util\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.566198 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-m9dhb\" (UniqueName: \"kubernetes.io/projected/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-kube-api-access-m9dhb\") pod \"1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.732035 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.959520 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs"] Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.967392 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.972660 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"openshift-service-ca.crt\"" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.972965 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-dockercfg-l82b9\"" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.973096 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operators\"/\"kube-root-ca.crt\"" Jan 23 09:07:02 crc kubenswrapper[5117]: I0123 09:07:02.977863 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.041696 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-c7t9l\" (UniqueName: \"kubernetes.io/projected/b2f00157-5589-44da-862e-a7686842803f-kube-api-access-c7t9l\") pod \"obo-prometheus-operator-9bc85b4bf-kzqgs\" (UID: \"b2f00157-5589-44da-862e-a7686842803f\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.082680 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.100745 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.100900 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.107555 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.110815 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.118167 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-dockercfg-675wd\"" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.118546 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"obo-prometheus-operator-admission-webhook-service-cert\"" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.122751 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.142630 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2\" (UID: \"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.142686 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3846cc0-7f86-492a-a1df-8d92f3fd5094-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4\" (UID: \"d3846cc0-7f86-492a-a1df-8d92f3fd5094\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.142765 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2\" (UID: \"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.142820 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3846cc0-7f86-492a-a1df-8d92f3fd5094-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4\" (UID: \"d3846cc0-7f86-492a-a1df-8d92f3fd5094\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.142964 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-c7t9l\" (UniqueName: \"kubernetes.io/projected/b2f00157-5589-44da-862e-a7686842803f-kube-api-access-c7t9l\") pod \"obo-prometheus-operator-9bc85b4bf-kzqgs\" (UID: \"b2f00157-5589-44da-862e-a7686842803f\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.171122 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-c7t9l\" (UniqueName: \"kubernetes.io/projected/b2f00157-5589-44da-862e-a7686842803f-kube-api-access-c7t9l\") 
pod \"obo-prometheus-operator-9bc85b4bf-kzqgs\" (UID: \"b2f00157-5589-44da-862e-a7686842803f\") " pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.211151 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.243687 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2\" (UID: \"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.243838 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3846cc0-7f86-492a-a1df-8d92f3fd5094-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4\" (UID: \"d3846cc0-7f86-492a-a1df-8d92f3fd5094\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.243877 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2\" (UID: \"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.243896 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3846cc0-7f86-492a-a1df-8d92f3fd5094-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4\" (UID: \"d3846cc0-7f86-492a-a1df-8d92f3fd5094\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.247514 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d3846cc0-7f86-492a-a1df-8d92f3fd5094-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4\" (UID: \"d3846cc0-7f86-492a-a1df-8d92f3fd5094\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.248581 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052-apiservice-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2\" (UID: \"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.249009 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d3846cc0-7f86-492a-a1df-8d92f3fd5094-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4\" (UID: \"d3846cc0-7f86-492a-a1df-8d92f3fd5094\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc 
kubenswrapper[5117]: I0123 09:07:03.249039 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052-webhook-cert\") pod \"obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2\" (UID: \"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052\") " pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.286869 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.293207 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/observability-operator-85c68dddb-4cs2d"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.306969 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.309804 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-4cs2d"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.311674 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-tls\"" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.311987 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"observability-operator-sa-dockercfg-zncrx\"" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.345147 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e628d153-ac5d-4e3f-8b44-e30ada549a31-observability-operator-tls\") pod \"observability-operator-85c68dddb-4cs2d\" (UID: \"e628d153-ac5d-4e3f-8b44-e30ada549a31\") " pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.345201 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8khr4\" (UniqueName: \"kubernetes.io/projected/e628d153-ac5d-4e3f-8b44-e30ada549a31-kube-api-access-8khr4\") pod \"observability-operator-85c68dddb-4cs2d\" (UID: \"e628d153-ac5d-4e3f-8b44-e30ada549a31\") " pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.407246 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-z7w5m"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.414510 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.416765 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operators\"/\"perses-operator-dockercfg-n24wq\"" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.417846 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.429146 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-z7w5m"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.429306 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.447191 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-sfmkb\" (UniqueName: \"kubernetes.io/projected/bb3c5e0d-73e6-4766-9f06-dc44d438a55c-kube-api-access-sfmkb\") pod \"perses-operator-669c9f96b5-z7w5m\" (UID: \"bb3c5e0d-73e6-4766-9f06-dc44d438a55c\") " pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.447291 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e628d153-ac5d-4e3f-8b44-e30ada549a31-observability-operator-tls\") pod \"observability-operator-85c68dddb-4cs2d\" (UID: \"e628d153-ac5d-4e3f-8b44-e30ada549a31\") " pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.447323 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8khr4\" (UniqueName: \"kubernetes.io/projected/e628d153-ac5d-4e3f-8b44-e30ada549a31-kube-api-access-8khr4\") pod \"observability-operator-85c68dddb-4cs2d\" (UID: \"e628d153-ac5d-4e3f-8b44-e30ada549a31\") " pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.448699 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb3c5e0d-73e6-4766-9f06-dc44d438a55c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-z7w5m\" (UID: \"bb3c5e0d-73e6-4766-9f06-dc44d438a55c\") " pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.453808 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"observability-operator-tls\" (UniqueName: \"kubernetes.io/secret/e628d153-ac5d-4e3f-8b44-e30ada549a31-observability-operator-tls\") pod \"observability-operator-85c68dddb-4cs2d\" (UID: \"e628d153-ac5d-4e3f-8b44-e30ada549a31\") " pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.466561 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8khr4\" (UniqueName: \"kubernetes.io/projected/e628d153-ac5d-4e3f-8b44-e30ada549a31-kube-api-access-8khr4\") pod \"observability-operator-85c68dddb-4cs2d\" (UID: \"e628d153-ac5d-4e3f-8b44-e30ada549a31\") " pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.506097 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs"] Jan 23 09:07:03 crc kubenswrapper[5117]: W0123 09:07:03.521539 5117 manager.go:1169] Failed to process watch event {EventType:0 
Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb2f00157_5589_44da_862e_a7686842803f.slice/crio-4ebfd2a14e74e0d14ce77620df4edf0b20e434276896a8bdf7c2b9cbbcc30ee0 WatchSource:0}: Error finding container 4ebfd2a14e74e0d14ce77620df4edf0b20e434276896a8bdf7c2b9cbbcc30ee0: Status 404 returned error can't find the container with id 4ebfd2a14e74e0d14ce77620df4edf0b20e434276896a8bdf7c2b9cbbcc30ee0 Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.543180 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" event={"ID":"b2f00157-5589-44da-862e-a7686842803f","Type":"ContainerStarted","Data":"4ebfd2a14e74e0d14ce77620df4edf0b20e434276896a8bdf7c2b9cbbcc30ee0"} Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.549874 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-sfmkb\" (UniqueName: \"kubernetes.io/projected/bb3c5e0d-73e6-4766-9f06-dc44d438a55c-kube-api-access-sfmkb\") pod \"perses-operator-669c9f96b5-z7w5m\" (UID: \"bb3c5e0d-73e6-4766-9f06-dc44d438a55c\") " pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.550002 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb3c5e0d-73e6-4766-9f06-dc44d438a55c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-z7w5m\" (UID: \"bb3c5e0d-73e6-4766-9f06-dc44d438a55c\") " pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.551955 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"openshift-service-ca\" (UniqueName: \"kubernetes.io/configmap/bb3c5e0d-73e6-4766-9f06-dc44d438a55c-openshift-service-ca\") pod \"perses-operator-669c9f96b5-z7w5m\" (UID: \"bb3c5e0d-73e6-4766-9f06-dc44d438a55c\") " pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.571248 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" event={"ID":"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c","Type":"ContainerDied","Data":"2e4a2c5a215edfd992c4e354887badf264c2e36a99df2fc00052d58618417a8a"} Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.571619 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerID="2e4a2c5a215edfd992c4e354887badf264c2e36a99df2fc00052d58618417a8a" exitCode=0 Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.572334 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" event={"ID":"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c","Type":"ContainerStarted","Data":"26b741efbaba153cee080fef1d0a8a2d799a4f5d5eba15407d6398efb242098a"} Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.608813 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-sfmkb\" (UniqueName: \"kubernetes.io/projected/bb3c5e0d-73e6-4766-9f06-dc44d438a55c-kube-api-access-sfmkb\") pod \"perses-operator-669c9f96b5-z7w5m\" (UID: \"bb3c5e0d-73e6-4766-9f06-dc44d438a55c\") " pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.627762 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.746396 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.793606 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.853094 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2"] Jan 23 09:07:03 crc kubenswrapper[5117]: I0123 09:07:03.988486 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/observability-operator-85c68dddb-4cs2d"] Jan 23 09:07:04 crc kubenswrapper[5117]: I0123 09:07:04.039559 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operators/perses-operator-669c9f96b5-z7w5m"] Jan 23 09:07:05 crc kubenswrapper[5117]: I0123 09:07:05.966744 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" event={"ID":"bb3c5e0d-73e6-4766-9f06-dc44d438a55c","Type":"ContainerStarted","Data":"03594d64c11348d445a75ea046e07a06bf569e8a95488599f465562f11775055"} Jan 23 09:07:05 crc kubenswrapper[5117]: I0123 09:07:05.971049 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" event={"ID":"d3846cc0-7f86-492a-a1df-8d92f3fd5094","Type":"ContainerStarted","Data":"165e312aa6e40401b5217a91b64cc1067a7ea36decd0ff7302ce572a7a69a730"} Jan 23 09:07:05 crc kubenswrapper[5117]: I0123 09:07:05.972875 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" event={"ID":"e628d153-ac5d-4e3f-8b44-e30ada549a31","Type":"ContainerStarted","Data":"afd537803c3d1c0f90d177c88086a879d2b030d2552209b10b3c84b28ea7439e"} Jan 23 09:07:05 crc kubenswrapper[5117]: I0123 09:07:05.981253 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" event={"ID":"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052","Type":"ContainerStarted","Data":"fe4d3bc2dcdd9a60926f923cb9979848de7ce2290b0b65f481656650f53e88fd"} Jan 23 09:07:07 crc kubenswrapper[5117]: I0123 09:07:07.083257 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:07:07 crc kubenswrapper[5117]: I0123 09:07:07.083597 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:07:07 crc kubenswrapper[5117]: I0123 09:07:07.180058 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:07:08 crc kubenswrapper[5117]: I0123 09:07:08.151409 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.205859 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elastic-operator-57956977b8-t9brt"] Jan 23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.978356 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-57956977b8-t9brt"] Jan 
23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.979494 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.984419 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-service-cert\"" Jan 23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.984697 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elastic-operator-dockercfg-zn226\"" Jan 23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.986530 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"openshift-service-ca.crt\"" Jan 23 09:07:09 crc kubenswrapper[5117]: I0123 09:07:09.986888 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"kube-root-ca.crt\"" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.023577 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-webhook-cert\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.023626 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-apiservice-cert\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.023783 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-58vc9\" (UniqueName: \"kubernetes.io/projected/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-kube-api-access-58vc9\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.125071 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-58vc9\" (UniqueName: \"kubernetes.io/projected/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-kube-api-access-58vc9\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.125209 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-webhook-cert\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.125254 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-apiservice-cert\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.134205 5117 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"webhook-cert\" (UniqueName: \"kubernetes.io/secret/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-webhook-cert\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.135235 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"apiservice-cert\" (UniqueName: \"kubernetes.io/secret/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-apiservice-cert\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.144958 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-58vc9\" (UniqueName: \"kubernetes.io/projected/d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425-kube-api-access-58vc9\") pod \"elastic-operator-57956977b8-t9brt\" (UID: \"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425\") " pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.307196 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elastic-operator-57956977b8-t9brt" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.618898 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elastic-operator-57956977b8-t9brt"] Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.738900 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2bzps"] Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.739202 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-2bzps" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="registry-server" containerID="cri-o://62a93081cb11f7528378fef5cf379ba2c3f953fc03049523414b49ccf2b4ac14" gracePeriod=2 Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.799274 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-6v6g2"] Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.829353 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-6v6g2"] Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.829971 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.838804 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"interconnect-operator-dockercfg-jhzb8\"" Jan 23 09:07:10 crc kubenswrapper[5117]: I0123 09:07:10.935671 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t5k5w\" (UniqueName: \"kubernetes.io/projected/4d7b0043-81f6-4990-80d6-526d3a214ec2-kube-api-access-t5k5w\") pod \"interconnect-operator-78b9bd8798-6v6g2\" (UID: \"4d7b0043-81f6-4990-80d6-526d3a214ec2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" Jan 23 09:07:11 crc kubenswrapper[5117]: I0123 09:07:11.033179 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-57956977b8-t9brt" event={"ID":"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425","Type":"ContainerStarted","Data":"0e94c5cb924e87af31c6d0c6f48242f6c1f2135ecfca9f4b86cc0cc12f2147b8"} Jan 23 09:07:11 crc kubenswrapper[5117]: I0123 09:07:11.037011 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t5k5w\" (UniqueName: \"kubernetes.io/projected/4d7b0043-81f6-4990-80d6-526d3a214ec2-kube-api-access-t5k5w\") pod \"interconnect-operator-78b9bd8798-6v6g2\" (UID: \"4d7b0043-81f6-4990-80d6-526d3a214ec2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" Jan 23 09:07:11 crc kubenswrapper[5117]: I0123 09:07:11.043928 5117 generic.go:358] "Generic (PLEG): container finished" podID="28473e88-fc17-4102-95da-44da0dd3e500" containerID="62a93081cb11f7528378fef5cf379ba2c3f953fc03049523414b49ccf2b4ac14" exitCode=0 Jan 23 09:07:11 crc kubenswrapper[5117]: I0123 09:07:11.044012 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerDied","Data":"62a93081cb11f7528378fef5cf379ba2c3f953fc03049523414b49ccf2b4ac14"} Jan 23 09:07:11 crc kubenswrapper[5117]: I0123 09:07:11.059233 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t5k5w\" (UniqueName: \"kubernetes.io/projected/4d7b0043-81f6-4990-80d6-526d3a214ec2-kube-api-access-t5k5w\") pod \"interconnect-operator-78b9bd8798-6v6g2\" (UID: \"4d7b0043-81f6-4990-80d6-526d3a214ec2\") " pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" Jan 23 09:07:11 crc kubenswrapper[5117]: I0123 09:07:11.194797 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" Jan 23 09:07:15 crc kubenswrapper[5117]: I0123 09:07:15.882980 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.013124 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-catalog-content\") pod \"28473e88-fc17-4102-95da-44da0dd3e500\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.013207 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-utilities\") pod \"28473e88-fc17-4102-95da-44da0dd3e500\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.013276 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qdvkt\" (UniqueName: \"kubernetes.io/projected/28473e88-fc17-4102-95da-44da0dd3e500-kube-api-access-qdvkt\") pod \"28473e88-fc17-4102-95da-44da0dd3e500\" (UID: \"28473e88-fc17-4102-95da-44da0dd3e500\") " Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.014856 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-utilities" (OuterVolumeSpecName: "utilities") pod "28473e88-fc17-4102-95da-44da0dd3e500" (UID: "28473e88-fc17-4102-95da-44da0dd3e500"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.049650 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28473e88-fc17-4102-95da-44da0dd3e500-kube-api-access-qdvkt" (OuterVolumeSpecName: "kube-api-access-qdvkt") pod "28473e88-fc17-4102-95da-44da0dd3e500" (UID: "28473e88-fc17-4102-95da-44da0dd3e500"). InnerVolumeSpecName "kube-api-access-qdvkt". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.074404 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "28473e88-fc17-4102-95da-44da0dd3e500" (UID: "28473e88-fc17-4102-95da-44da0dd3e500"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.080865 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-2bzps" event={"ID":"28473e88-fc17-4102-95da-44da0dd3e500","Type":"ContainerDied","Data":"b0bb0d0f95b219177ab7b6cf4287742c928a19ca3c7c77557629df329a61f798"} Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.080923 5117 scope.go:117] "RemoveContainer" containerID="62a93081cb11f7528378fef5cf379ba2c3f953fc03049523414b49ccf2b4ac14" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.081079 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-2bzps" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.115931 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.115991 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/28473e88-fc17-4102-95da-44da0dd3e500-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.116005 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qdvkt\" (UniqueName: \"kubernetes.io/projected/28473e88-fc17-4102-95da-44da0dd3e500-kube-api-access-qdvkt\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.118951 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-2bzps"] Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.123085 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-2bzps"] Jan 23 09:07:16 crc kubenswrapper[5117]: I0123 09:07:16.778546 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28473e88-fc17-4102-95da-44da0dd3e500" path="/var/lib/kubelet/pods/28473e88-fc17-4102-95da-44da0dd3e500/volumes" Jan 23 09:07:23 crc kubenswrapper[5117]: I0123 09:07:23.432532 5117 scope.go:117] "RemoveContainer" containerID="0db5ed479c9d76bbdbf4ee968edc685597c351dc58354439310b9fa1d040075b" Jan 23 09:07:23 crc kubenswrapper[5117]: I0123 09:07:23.665680 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/interconnect-operator-78b9bd8798-6v6g2"] Jan 23 09:07:24 crc kubenswrapper[5117]: I0123 09:07:24.658218 5117 scope.go:117] "RemoveContainer" containerID="d091efc4a02b96eebbe1926f3ab77a4484064d97a4e899dec9cc526f81bae08d" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.157279 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" event={"ID":"e628d153-ac5d-4e3f-8b44-e30ada549a31","Type":"ContainerStarted","Data":"39b611106132472bbe1cb349b96bc3b44c0a3196dd70ece6fd42ecae8b4aeb66"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.157621 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.158599 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" event={"ID":"b2f00157-5589-44da-862e-a7686842803f","Type":"ContainerStarted","Data":"7678304b196c9182901fa00d517fcfeed203ae394466165658c2989dd00519ce"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.159182 5117 patch_prober.go:28] interesting pod/observability-operator-85c68dddb-4cs2d container/operator namespace/openshift-operators: Readiness probe status=failure output="Get \"http://10.217.0.49:8081/healthz\": dial tcp 10.217.0.49:8081: connect: connection refused" start-of-body= Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.159242 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" podUID="e628d153-ac5d-4e3f-8b44-e30ada549a31" containerName="operator" probeResult="failure" output="Get 
\"http://10.217.0.49:8081/healthz\": dial tcp 10.217.0.49:8081: connect: connection refused" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.160687 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" event={"ID":"0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052","Type":"ContainerStarted","Data":"7432c57786408f5550ecebe67e24b53ffc77ab7c6d17ef5aecbdda25f4b9c211"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.161688 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" event={"ID":"4d7b0043-81f6-4990-80d6-526d3a214ec2","Type":"ContainerStarted","Data":"c451901b6ce985a3eee0baeb6c03df3e2f107e5eebb066a3b379ca2622ccdf01"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.163720 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" event={"ID":"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c","Type":"ContainerStarted","Data":"134a79b9d750a54be20acb8dd56b8603d732dc5e91a63c2caafe1c63f74a1a33"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.165374 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" event={"ID":"bb3c5e0d-73e6-4766-9f06-dc44d438a55c","Type":"ContainerStarted","Data":"b58ff53b9cf750744c0febbedcd9d4e13fcb4d838628d2d1cd0ddfe2fc53835f"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.165467 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.166821 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" event={"ID":"d3846cc0-7f86-492a-a1df-8d92f3fd5094","Type":"ContainerStarted","Data":"0fc506631b72e383fa170689e5e7d32982625fa9b21d3e61e692a5a6269b15f1"} Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.189606 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" podStartSLOduration=1.515240862 podStartE2EDuration="22.189588749s" podCreationTimestamp="2026-01-23 09:07:03 +0000 UTC" firstStartedPulling="2026-01-23 09:07:03.984634769 +0000 UTC m=+835.740759795" lastFinishedPulling="2026-01-23 09:07:24.658982666 +0000 UTC m=+856.415107682" observedRunningTime="2026-01-23 09:07:25.188414768 +0000 UTC m=+856.944539804" watchObservedRunningTime="2026-01-23 09:07:25.189588749 +0000 UTC m=+856.945713775" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.213921 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4" podStartSLOduration=1.39300822 podStartE2EDuration="22.213903993s" podCreationTimestamp="2026-01-23 09:07:03 +0000 UTC" firstStartedPulling="2026-01-23 09:07:03.830456761 +0000 UTC m=+835.586581787" lastFinishedPulling="2026-01-23 09:07:24.651352534 +0000 UTC m=+856.407477560" observedRunningTime="2026-01-23 09:07:25.210997846 +0000 UTC m=+856.967122872" watchObservedRunningTime="2026-01-23 09:07:25.213903993 +0000 UTC m=+856.970029019" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.291744 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2" 
podStartSLOduration=3.594895785 podStartE2EDuration="22.291726814s" podCreationTimestamp="2026-01-23 09:07:03 +0000 UTC" firstStartedPulling="2026-01-23 09:07:03.872651829 +0000 UTC m=+835.628776855" lastFinishedPulling="2026-01-23 09:07:22.569482858 +0000 UTC m=+854.325607884" observedRunningTime="2026-01-23 09:07:25.266751383 +0000 UTC m=+857.022876409" watchObservedRunningTime="2026-01-23 09:07:25.291726814 +0000 UTC m=+857.047851840" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.292228 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" podStartSLOduration=1.6951537779999999 podStartE2EDuration="22.292220917s" podCreationTimestamp="2026-01-23 09:07:03 +0000 UTC" firstStartedPulling="2026-01-23 09:07:04.061330062 +0000 UTC m=+835.817455078" lastFinishedPulling="2026-01-23 09:07:24.658397201 +0000 UTC m=+856.414522217" observedRunningTime="2026-01-23 09:07:25.289505505 +0000 UTC m=+857.045630531" watchObservedRunningTime="2026-01-23 09:07:25.292220917 +0000 UTC m=+857.048345943" Jan 23 09:07:25 crc kubenswrapper[5117]: I0123 09:07:25.321458 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operators/obo-prometheus-operator-9bc85b4bf-kzqgs" podStartSLOduration=3.41372375 podStartE2EDuration="23.321439751s" podCreationTimestamp="2026-01-23 09:07:02 +0000 UTC" firstStartedPulling="2026-01-23 09:07:03.524744522 +0000 UTC m=+835.280869548" lastFinishedPulling="2026-01-23 09:07:23.432460523 +0000 UTC m=+855.188585549" observedRunningTime="2026-01-23 09:07:25.313425989 +0000 UTC m=+857.069551005" watchObservedRunningTime="2026-01-23 09:07:25.321439751 +0000 UTC m=+857.077564777" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.176608 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elastic-operator-57956977b8-t9brt" event={"ID":"d0caf2c4-ab74-48d9-aba5-6aa4d4ba6425","Type":"ContainerStarted","Data":"b89ee79850b435db5a8c1608c679a550861ed53a00cb83fae7c876d9fc3277e5"} Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.181552 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerID="134a79b9d750a54be20acb8dd56b8603d732dc5e91a63c2caafe1c63f74a1a33" exitCode=0 Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.181652 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" event={"ID":"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c","Type":"ContainerDied","Data":"134a79b9d750a54be20acb8dd56b8603d732dc5e91a63c2caafe1c63f74a1a33"} Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.183194 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/observability-operator-85c68dddb-4cs2d" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.204218 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elastic-operator-57956977b8-t9brt" podStartSLOduration=3.099513466 podStartE2EDuration="17.204202491s" podCreationTimestamp="2026-01-23 09:07:09 +0000 UTC" firstStartedPulling="2026-01-23 09:07:10.63427805 +0000 UTC m=+842.390403076" lastFinishedPulling="2026-01-23 09:07:24.738967075 +0000 UTC m=+856.495092101" observedRunningTime="2026-01-23 09:07:26.198833939 +0000 UTC m=+857.954958965" watchObservedRunningTime="2026-01-23 09:07:26.204202491 +0000 UTC m=+857.960327517" Jan 23 09:07:26 crc kubenswrapper[5117]: 
I0123 09:07:26.321114 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.321966 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="extract-utilities" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.321982 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="extract-utilities" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.322002 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="extract-content" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.322012 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="extract-content" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.322042 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="registry-server" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.322051 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="registry-server" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.322183 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="28473e88-fc17-4102-95da-44da0dd3e500" containerName="registry-server" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.328756 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.333569 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-config\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.333972 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-remote-ca\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.334063 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-unicast-hosts\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.334411 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-http-certs-internal\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.334445 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-internal-users\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.334919 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-xpack-file-realm\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.335085 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-dockercfg-jtr66\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.335351 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"elasticsearch-es-scripts\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.335403 5117 reflector.go:430] "Caches populated" type="*v1.Secret" 
reflector="object-\"service-telemetry\"/\"elasticsearch-es-default-es-transport-certs\"" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362505 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362581 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/0e58c26d-fd70-4f32-b491-ae446b4c1769-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362610 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362639 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362659 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362701 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362724 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362743 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-transport-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362774 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362805 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362827 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362851 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362892 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362922 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.362958 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.363204 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463713 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-logs\" 
(UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463767 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/0e58c26d-fd70-4f32-b491-ae446b4c1769-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463792 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463863 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463916 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463960 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.463991 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464012 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464053 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-probe-user\") pod 
\"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464089 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464111 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464177 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464210 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464243 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-http-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464275 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464379 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-bin-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-bin-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.464655 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-plugins-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-plugins-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 
23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.465368 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-logs\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elasticsearch-logs\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.465422 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp-volume\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-tmp-volume\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.466025 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-scripts\" (UniqueName: \"kubernetes.io/configmap/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-scripts\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.466389 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config-local\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-config-local\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.466474 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elasticsearch-data\" (UniqueName: \"kubernetes.io/empty-dir/0e58c26d-fd70-4f32-b491-ae446b4c1769-elasticsearch-data\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.467515 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-unicast-hosts\" (UniqueName: \"kubernetes.io/configmap/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-unicast-hosts\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.471928 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"downward-api\" (UniqueName: \"kubernetes.io/downward-api/0e58c26d-fd70-4f32-b491-ae446b4c1769-downward-api\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.480723 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-remote-certificate-authorities\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-remote-certificate-authorities\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.484554 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-http-certificates\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-http-certificates\") pod 
\"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.499336 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-probe-user\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-probe-user\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.500794 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-transport-certificates\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-transport-certificates\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.501723 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-elasticsearch-config\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-elasticsearch-config\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.510984 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-internal-xpack-file-realm\" (UniqueName: \"kubernetes.io/secret/0e58c26d-fd70-4f32-b491-ae446b4c1769-elastic-internal-xpack-file-realm\") pod \"elasticsearch-es-default-0\" (UID: \"0e58c26d-fd70-4f32-b491-ae446b4c1769\") " pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:26 crc kubenswrapper[5117]: I0123 09:07:26.679419 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:07:27 crc kubenswrapper[5117]: I0123 09:07:27.016260 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 09:07:27 crc kubenswrapper[5117]: W0123 09:07:27.025534 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod0e58c26d_fd70_4f32_b491_ae446b4c1769.slice/crio-2f994ce56fa6bd0b5cc2529ab1ef5c0ce60d7ddc186d5bf9dc42443017fa8235 WatchSource:0}: Error finding container 2f994ce56fa6bd0b5cc2529ab1ef5c0ce60d7ddc186d5bf9dc42443017fa8235: Status 404 returned error can't find the container with id 2f994ce56fa6bd0b5cc2529ab1ef5c0ce60d7ddc186d5bf9dc42443017fa8235 Jan 23 09:07:27 crc kubenswrapper[5117]: I0123 09:07:27.190179 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"0e58c26d-fd70-4f32-b491-ae446b4c1769","Type":"ContainerStarted","Data":"2f994ce56fa6bd0b5cc2529ab1ef5c0ce60d7ddc186d5bf9dc42443017fa8235"} Jan 23 09:07:27 crc kubenswrapper[5117]: I0123 09:07:27.193669 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerID="809288dc14466ab53735f803ddc4152ccb5fd3248dc7fb73713073a88c8a1639" exitCode=0 Jan 23 09:07:27 crc kubenswrapper[5117]: I0123 09:07:27.193774 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" event={"ID":"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c","Type":"ContainerDied","Data":"809288dc14466ab53735f803ddc4152ccb5fd3248dc7fb73713073a88c8a1639"} Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.564182 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.597120 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-util\") pod \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.597216 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-bundle\") pod \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.597406 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-m9dhb\" (UniqueName: \"kubernetes.io/projected/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-kube-api-access-m9dhb\") pod \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\" (UID: \"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c\") " Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.598795 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-bundle" (OuterVolumeSpecName: "bundle") pod "2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" (UID: "2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.605951 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-util" (OuterVolumeSpecName: "util") pod "2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" (UID: "2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.631408 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-kube-api-access-m9dhb" (OuterVolumeSpecName: "kube-api-access-m9dhb") pod "2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" (UID: "2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c"). InnerVolumeSpecName "kube-api-access-m9dhb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.698362 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-m9dhb\" (UniqueName: \"kubernetes.io/projected/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-kube-api-access-m9dhb\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.698409 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:28 crc kubenswrapper[5117]: I0123 09:07:28.698418 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:07:29 crc kubenswrapper[5117]: I0123 09:07:29.226611 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" event={"ID":"2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c","Type":"ContainerDied","Data":"26b741efbaba153cee080fef1d0a8a2d799a4f5d5eba15407d6398efb242098a"} Jan 23 09:07:29 crc kubenswrapper[5117]: I0123 09:07:29.226669 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="26b741efbaba153cee080fef1d0a8a2d799a4f5d5eba15407d6398efb242098a" Jan 23 09:07:29 crc kubenswrapper[5117]: I0123 09:07:29.226669 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z" Jan 23 09:07:36 crc kubenswrapper[5117]: I0123 09:07:36.185051 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-operators/perses-operator-669c9f96b5-z7w5m" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.631869 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct"] Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633241 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="util" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633256 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="util" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633272 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="pull" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633277 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="pull" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633296 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="extract" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633303 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="extract" Jan 23 09:07:40 crc kubenswrapper[5117]: I0123 09:07:40.633405 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c" containerName="extract" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.025734 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.028642 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager-operator\"/\"cert-manager-operator-controller-manager-dockercfg-v2nq8\"" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.029281 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"openshift-service-ca.crt\"" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.029434 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager-operator\"/\"kube-root-ca.crt\"" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.046659 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct"] Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.095402 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/958e4707-8c40-454d-a33a-d82778c4e9b5-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-749ct\" (UID: \"958e4707-8c40-454d-a33a-d82778c4e9b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.095579 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-4fpcm\" (UniqueName: \"kubernetes.io/projected/958e4707-8c40-454d-a33a-d82778c4e9b5-kube-api-access-4fpcm\") pod \"cert-manager-operator-controller-manager-64c74584c4-749ct\" (UID: \"958e4707-8c40-454d-a33a-d82778c4e9b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.196616 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-4fpcm\" (UniqueName: \"kubernetes.io/projected/958e4707-8c40-454d-a33a-d82778c4e9b5-kube-api-access-4fpcm\") pod \"cert-manager-operator-controller-manager-64c74584c4-749ct\" (UID: \"958e4707-8c40-454d-a33a-d82778c4e9b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.196730 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/958e4707-8c40-454d-a33a-d82778c4e9b5-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-749ct\" (UID: \"958e4707-8c40-454d-a33a-d82778c4e9b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.197311 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tmp\" (UniqueName: \"kubernetes.io/empty-dir/958e4707-8c40-454d-a33a-d82778c4e9b5-tmp\") pod \"cert-manager-operator-controller-manager-64c74584c4-749ct\" (UID: \"958e4707-8c40-454d-a33a-d82778c4e9b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.215619 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-4fpcm\" (UniqueName: \"kubernetes.io/projected/958e4707-8c40-454d-a33a-d82778c4e9b5-kube-api-access-4fpcm\") pod \"cert-manager-operator-controller-manager-64c74584c4-749ct\" 
(UID: \"958e4707-8c40-454d-a33a-d82778c4e9b5\") " pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:07:41 crc kubenswrapper[5117]: I0123 09:07:41.340852 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" Jan 23 09:08:00 crc kubenswrapper[5117]: I0123 09:08:00.127519 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485988-bvkjd"] Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.163167 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.165754 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.166299 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.166998 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.170303 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485988-bvkjd"] Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.261898 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vnz6v\" (UniqueName: \"kubernetes.io/projected/6cd27b8c-4532-4d81-8e54-04c774c10f77-kube-api-access-vnz6v\") pod \"auto-csr-approver-29485988-bvkjd\" (UID: \"6cd27b8c-4532-4d81-8e54-04c774c10f77\") " pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.363360 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vnz6v\" (UniqueName: \"kubernetes.io/projected/6cd27b8c-4532-4d81-8e54-04c774c10f77-kube-api-access-vnz6v\") pod \"auto-csr-approver-29485988-bvkjd\" (UID: \"6cd27b8c-4532-4d81-8e54-04c774c10f77\") " pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.384922 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vnz6v\" (UniqueName: \"kubernetes.io/projected/6cd27b8c-4532-4d81-8e54-04c774c10f77-kube-api-access-vnz6v\") pod \"auto-csr-approver-29485988-bvkjd\" (UID: \"6cd27b8c-4532-4d81-8e54-04c774c10f77\") " pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.487903 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:01 crc kubenswrapper[5117]: I0123 09:08:01.825430 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.470760 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.470996 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.472795 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.473084 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-global-ca\"" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.474069 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-sys-config\"" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.474214 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-1-ca\"" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579018 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579064 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579180 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579214 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579247 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dgfl9\" (UniqueName: \"kubernetes.io/projected/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-kube-api-access-dgfl9\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579272 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: 
I0123 09:08:02.579399 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579446 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579517 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579563 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579603 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.579622 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681095 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681193 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681221 5117 reconciler_common.go:224] "operationExecutor.MountVolume started 
for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681246 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681196 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildcachedir\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681273 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681351 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681415 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681455 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681490 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681549 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: 
\"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681579 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildworkdir\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681640 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681752 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dgfl9\" (UniqueName: \"kubernetes.io/projected/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-kube-api-access-dgfl9\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681878 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-root\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.681944 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-node-pullsecrets\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.682062 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-proxy-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.682085 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-blob-cache\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.682065 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-system-configs\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.682103 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" 
(UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-run\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.682717 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-ca-bundles\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.685895 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.686013 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.698068 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dgfl9\" (UniqueName: \"kubernetes.io/projected/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-kube-api-access-dgfl9\") pod \"service-telemetry-operator-1-build\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:02 crc kubenswrapper[5117]: I0123 09:08:02.788100 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.174301 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485988-bvkjd"] Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.449056 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" event={"ID":"4d7b0043-81f6-4990-80d6-526d3a214ec2","Type":"ContainerStarted","Data":"0dd63cedb9d5919de32c6fc184282d9c4f6aced3abcaec0df29a534edc316b4d"} Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.450641 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" event={"ID":"6cd27b8c-4532-4d81-8e54-04c774c10f77","Type":"ContainerStarted","Data":"8da9aa505eddb9c02f1128cf3f752b6f91dd389a9eb9051458ddaeed3f1a60bf"} Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.451837 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"0e58c26d-fd70-4f32-b491-ae446b4c1769","Type":"ContainerStarted","Data":"5a30610818e8168b6c4b382dc38f38742afb0cd6ae58b98669f97be77fe83e5e"} Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.466987 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct"] Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.473017 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/interconnect-operator-78b9bd8798-6v6g2" podStartSLOduration=18.208987597 podStartE2EDuration="54.472997505s" podCreationTimestamp="2026-01-23 09:07:10 +0000 UTC" firstStartedPulling="2026-01-23 09:07:24.655969556 +0000 UTC m=+856.412094583" lastFinishedPulling="2026-01-23 09:08:00.919979465 +0000 UTC m=+892.676104491" observedRunningTime="2026-01-23 09:08:04.465305351 +0000 UTC m=+896.221430377" watchObservedRunningTime="2026-01-23 09:08:04.472997505 +0000 UTC m=+896.229122551" Jan 23 09:08:04 crc kubenswrapper[5117]: W0123 09:08:04.482316 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod958e4707_8c40_454d_a33a_d82778c4e9b5.slice/crio-c34402e08cda30084f211b6894a41ed10bdba0d621554d5011f701adc888ba1f WatchSource:0}: Error finding container c34402e08cda30084f211b6894a41ed10bdba0d621554d5011f701adc888ba1f: Status 404 returned error can't find the container with id c34402e08cda30084f211b6894a41ed10bdba0d621554d5011f701adc888ba1f Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.482411 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.682690 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 09:08:04 crc kubenswrapper[5117]: I0123 09:08:04.710306 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/elasticsearch-es-default-0"] Jan 23 09:08:05 crc kubenswrapper[5117]: I0123 09:08:05.477922 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" event={"ID":"958e4707-8c40-454d-a33a-d82778c4e9b5","Type":"ContainerStarted","Data":"c34402e08cda30084f211b6894a41ed10bdba0d621554d5011f701adc888ba1f"} Jan 23 09:08:05 crc kubenswrapper[5117]: I0123 
09:08:05.482010 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"588be456-8b6b-40d4-a9b7-dbfe4bc808bc","Type":"ContainerStarted","Data":"c790f15ace21498752b744ce06444a4215a190edc1dc305af6defcdd0eb9b005"} Jan 23 09:08:05 crc kubenswrapper[5117]: I0123 09:08:05.487769 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" event={"ID":"6cd27b8c-4532-4d81-8e54-04c774c10f77","Type":"ContainerStarted","Data":"089159a1d65087f5620429ba04d4f9e6b07cd65fbb29858b13b431545e678346"} Jan 23 09:08:05 crc kubenswrapper[5117]: I0123 09:08:05.512412 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" podStartSLOduration=4.710525496 podStartE2EDuration="5.512389463s" podCreationTimestamp="2026-01-23 09:08:00 +0000 UTC" firstStartedPulling="2026-01-23 09:08:04.185363197 +0000 UTC m=+895.941488223" lastFinishedPulling="2026-01-23 09:08:04.987227164 +0000 UTC m=+896.743352190" observedRunningTime="2026-01-23 09:08:05.506804485 +0000 UTC m=+897.262929521" watchObservedRunningTime="2026-01-23 09:08:05.512389463 +0000 UTC m=+897.268514509" Jan 23 09:08:06 crc kubenswrapper[5117]: I0123 09:08:06.500436 5117 generic.go:358] "Generic (PLEG): container finished" podID="0e58c26d-fd70-4f32-b491-ae446b4c1769" containerID="5a30610818e8168b6c4b382dc38f38742afb0cd6ae58b98669f97be77fe83e5e" exitCode=0 Jan 23 09:08:06 crc kubenswrapper[5117]: I0123 09:08:06.500533 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"0e58c26d-fd70-4f32-b491-ae446b4c1769","Type":"ContainerDied","Data":"5a30610818e8168b6c4b382dc38f38742afb0cd6ae58b98669f97be77fe83e5e"} Jan 23 09:08:06 crc kubenswrapper[5117]: I0123 09:08:06.503031 5117 generic.go:358] "Generic (PLEG): container finished" podID="6cd27b8c-4532-4d81-8e54-04c774c10f77" containerID="089159a1d65087f5620429ba04d4f9e6b07cd65fbb29858b13b431545e678346" exitCode=0 Jan 23 09:08:06 crc kubenswrapper[5117]: I0123 09:08:06.503108 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" event={"ID":"6cd27b8c-4532-4d81-8e54-04c774c10f77","Type":"ContainerDied","Data":"089159a1d65087f5620429ba04d4f9e6b07cd65fbb29858b13b431545e678346"} Jan 23 09:08:07 crc kubenswrapper[5117]: I0123 09:08:07.516221 5117 generic.go:358] "Generic (PLEG): container finished" podID="0e58c26d-fd70-4f32-b491-ae446b4c1769" containerID="a44213a4e7927db24eb65497198a277c75b8f1de96365c794c7cac2914a72027" exitCode=0 Jan 23 09:08:07 crc kubenswrapper[5117]: I0123 09:08:07.516305 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"0e58c26d-fd70-4f32-b491-ae446b4c1769","Type":"ContainerDied","Data":"a44213a4e7927db24eb65497198a277c75b8f1de96365c794c7cac2914a72027"} Jan 23 09:08:11 crc kubenswrapper[5117]: I0123 09:08:11.877040 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 09:08:13 crc kubenswrapper[5117]: I0123 09:08:13.943653 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.133345 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 
09:08:14.133477 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.134903 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-sys-config\"" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.135420 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-ca\"" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.135454 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-2-global-ca\"" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295303 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295350 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295384 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295477 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295507 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295536 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295592 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295638 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295664 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295695 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-dmpls\" (UniqueName: \"kubernetes.io/projected/6c948705-ea0a-48e2-b9a3-74dbc4562c72-kube-api-access-dmpls\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295722 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.295758 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.396718 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.396767 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-dmpls\" (UniqueName: \"kubernetes.io/projected/6c948705-ea0a-48e2-b9a3-74dbc4562c72-kube-api-access-dmpls\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.396798 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.396978 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397163 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397183 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildcachedir\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397251 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397356 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-system-configs\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397369 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397443 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397502 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 
09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397542 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397688 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397739 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.397833 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-node-pullsecrets\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.398036 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-proxy-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.398946 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-run\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.399278 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-blob-cache\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.399421 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildworkdir\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.399825 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-ca-bundles\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.399931 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-root\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.404343 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.404725 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.415815 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-dmpls\" (UniqueName: \"kubernetes.io/projected/6c948705-ea0a-48e2-b9a3-74dbc4562c72-kube-api-access-dmpls\") pod \"service-telemetry-operator-2-build\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:14 crc kubenswrapper[5117]: I0123 09:08:14.450828 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.093307 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.212638 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-vnz6v\" (UniqueName: \"kubernetes.io/projected/6cd27b8c-4532-4d81-8e54-04c774c10f77-kube-api-access-vnz6v\") pod \"6cd27b8c-4532-4d81-8e54-04c774c10f77\" (UID: \"6cd27b8c-4532-4d81-8e54-04c774c10f77\") " Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.223839 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6cd27b8c-4532-4d81-8e54-04c774c10f77-kube-api-access-vnz6v" (OuterVolumeSpecName: "kube-api-access-vnz6v") pod "6cd27b8c-4532-4d81-8e54-04c774c10f77" (UID: "6cd27b8c-4532-4d81-8e54-04c774c10f77"). InnerVolumeSpecName "kube-api-access-vnz6v". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.241576 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.241842 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.258876 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.260069 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.314861 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-vnz6v\" (UniqueName: \"kubernetes.io/projected/6cd27b8c-4532-4d81-8e54-04c774c10f77-kube-api-access-vnz6v\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.555803 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-2-build"] Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.582708 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" event={"ID":"958e4707-8c40-454d-a33a-d82778c4e9b5","Type":"ContainerStarted","Data":"70823d149a891467230c928aae36b0badeeccf574373b07c9a1d2eb78389f256"} Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.585856 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/elasticsearch-es-default-0" event={"ID":"0e58c26d-fd70-4f32-b491-ae446b4c1769","Type":"ContainerStarted","Data":"c8fe22829d5ea5eb84da779f669340a50d13dc63808e6379ef45fc8208b91e22"} Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.586724 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.591296 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerStarted","Data":"b67e8a2ace26db89d862c84d6e30d01e42d44651dd8665e01ce13376fcc77de4"} Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.593499 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.593541 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485988-bvkjd" event={"ID":"6cd27b8c-4532-4d81-8e54-04c774c10f77","Type":"ContainerDied","Data":"8da9aa505eddb9c02f1128cf3f752b6f91dd389a9eb9051458ddaeed3f1a60bf"} Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.593577 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8da9aa505eddb9c02f1128cf3f752b6f91dd389a9eb9051458ddaeed3f1a60bf" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.608792 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager-operator/cert-manager-operator-controller-manager-64c74584c4-749ct" podStartSLOduration=24.943688011 podStartE2EDuration="35.608776139s" podCreationTimestamp="2026-01-23 09:07:40 +0000 UTC" firstStartedPulling="2026-01-23 09:08:04.485921957 +0000 UTC m=+896.242046983" lastFinishedPulling="2026-01-23 09:08:15.151010085 +0000 UTC m=+906.907135111" observedRunningTime="2026-01-23 09:08:15.602810001 +0000 UTC m=+907.358935037" watchObservedRunningTime="2026-01-23 09:08:15.608776139 +0000 UTC m=+907.364901165" Jan 23 09:08:15 crc kubenswrapper[5117]: I0123 09:08:15.641291 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/elasticsearch-es-default-0" podStartSLOduration=12.583435027 podStartE2EDuration="49.641271759s" podCreationTimestamp="2026-01-23 09:07:26 +0000 UTC" firstStartedPulling="2026-01-23 09:07:27.040211882 +0000 UTC m=+858.796336908" lastFinishedPulling="2026-01-23 09:08:04.098048614 +0000 UTC m=+895.854173640" observedRunningTime="2026-01-23 09:08:15.635416714 +0000 UTC m=+907.391541750" watchObservedRunningTime="2026-01-23 09:08:15.641271759 +0000 UTC m=+907.397396785" Jan 23 09:08:16 crc kubenswrapper[5117]: I0123 09:08:16.180999 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485982-bp52j"] Jan 23 09:08:16 crc kubenswrapper[5117]: I0123 09:08:16.186394 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485982-bp52j"] Jan 23 09:08:16 crc kubenswrapper[5117]: I0123 09:08:16.603498 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerStarted","Data":"34fc1d153b3803796c9fe82a42ff6e092ece5fcf72a0aefe932520a163e3a02c"} Jan 23 09:08:16 crc kubenswrapper[5117]: I0123 09:08:16.608353 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/service-telemetry-operator-1-build" podUID="588be456-8b6b-40d4-a9b7-dbfe4bc808bc" containerName="manage-dockerfile" containerID="cri-o://23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc" gracePeriod=30 Jan 23 09:08:16 crc kubenswrapper[5117]: I0123 09:08:16.608908 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"588be456-8b6b-40d4-a9b7-dbfe4bc808bc","Type":"ContainerStarted","Data":"23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc"} Jan 23 09:08:16 crc kubenswrapper[5117]: I0123 09:08:16.778973 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6fdf379f-c969-4262-aa1c-159c476a2e9f" path="/var/lib/kubelet/pods/6fdf379f-c969-4262-aa1c-159c476a2e9f/volumes" Jan 
23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.274926 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_588be456-8b6b-40d4-a9b7-dbfe4bc808bc/manage-dockerfile/0.log" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.275012 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445184 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildworkdir\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445245 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-push\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445288 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-proxy-ca-bundles\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445328 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-node-pullsecrets\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445354 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-root\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445404 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-pull\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445442 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-run\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445473 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-ca-bundles\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445486 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: 
\"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildcachedir\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445555 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-system-configs\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445583 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-blob-cache\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445633 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dgfl9\" (UniqueName: \"kubernetes.io/projected/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-kube-api-access-dgfl9\") pod \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\" (UID: \"588be456-8b6b-40d4-a9b7-dbfe4bc808bc\") " Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445821 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.446101 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445865 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.445909 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.446072 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.446084 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.446569 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.446720 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.446809 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.454321 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.474291 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-kube-api-access-dgfl9" (OuterVolumeSpecName: "kube-api-access-dgfl9") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "kube-api-access-dgfl9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.474283 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "588be456-8b6b-40d4-a9b7-dbfe4bc808bc" (UID: "588be456-8b6b-40d4-a9b7-dbfe4bc808bc"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547779 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547818 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547830 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547840 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547851 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547861 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547871 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dgfl9\" (UniqueName: \"kubernetes.io/projected/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-kube-api-access-dgfl9\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547883 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547895 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547905 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547915 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.547926 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/588be456-8b6b-40d4-a9b7-dbfe4bc808bc-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.614199 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-operator-1-build_588be456-8b6b-40d4-a9b7-dbfe4bc808bc/manage-dockerfile/0.log" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.614250 5117 generic.go:358] "Generic (PLEG): container finished" podID="588be456-8b6b-40d4-a9b7-dbfe4bc808bc" containerID="23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc" exitCode=1 Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.614471 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"588be456-8b6b-40d4-a9b7-dbfe4bc808bc","Type":"ContainerDied","Data":"23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc"} Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.614587 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-1-build" event={"ID":"588be456-8b6b-40d4-a9b7-dbfe4bc808bc","Type":"ContainerDied","Data":"c790f15ace21498752b744ce06444a4215a190edc1dc305af6defcdd0eb9b005"} Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.614630 5117 scope.go:117] "RemoveContainer" containerID="23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.614662 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-1-build" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.638928 5117 scope.go:117] "RemoveContainer" containerID="23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc" Jan 23 09:08:17 crc kubenswrapper[5117]: E0123 09:08:17.639251 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc\": container with ID starting with 23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc not found: ID does not exist" containerID="23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.639277 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc"} err="failed to get container status \"23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc\": rpc error: code = NotFound desc = could not find container \"23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc\": container with ID starting with 23188f01addd0d01c06a422feabdb3e083079f49e769bf91cf66e516da217cbc not found: ID does not exist" Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.646079 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 09:08:17 crc kubenswrapper[5117]: I0123 09:08:17.651796 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-1-build"] Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.633863 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-7597f"] Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.634949 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6cd27b8c-4532-4d81-8e54-04c774c10f77" containerName="oc" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.634977 5117 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6cd27b8c-4532-4d81-8e54-04c774c10f77" containerName="oc" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.635013 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="588be456-8b6b-40d4-a9b7-dbfe4bc808bc" containerName="manage-dockerfile" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.635022 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="588be456-8b6b-40d4-a9b7-dbfe4bc808bc" containerName="manage-dockerfile" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.635179 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="6cd27b8c-4532-4d81-8e54-04c774c10f77" containerName="oc" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.635198 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="588be456-8b6b-40d4-a9b7-dbfe4bc808bc" containerName="manage-dockerfile" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.641567 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.643672 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"openshift-service-ca.crt\"" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.644077 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"cert-manager\"/\"kube-root-ca.crt\"" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.645756 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-webhook-dockercfg-rngnm\"" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.646643 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-7597f"] Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.668914 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wtqs5\" (UniqueName: \"kubernetes.io/projected/ba66ec84-7f70-4462-a90a-9d91db534c99-kube-api-access-wtqs5\") pod \"cert-manager-webhook-7894b5b9b4-7597f\" (UID: \"ba66ec84-7f70-4462-a90a-9d91db534c99\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.669050 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba66ec84-7f70-4462-a90a-9d91db534c99-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-7597f\" (UID: \"ba66ec84-7f70-4462-a90a-9d91db534c99\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.771042 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba66ec84-7f70-4462-a90a-9d91db534c99-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-7597f\" (UID: \"ba66ec84-7f70-4462-a90a-9d91db534c99\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.771187 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wtqs5\" (UniqueName: \"kubernetes.io/projected/ba66ec84-7f70-4462-a90a-9d91db534c99-kube-api-access-wtqs5\") pod \"cert-manager-webhook-7894b5b9b4-7597f\" (UID: \"ba66ec84-7f70-4462-a90a-9d91db534c99\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc 
kubenswrapper[5117]: I0123 09:08:18.779102 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="588be456-8b6b-40d4-a9b7-dbfe4bc808bc" path="/var/lib/kubelet/pods/588be456-8b6b-40d4-a9b7-dbfe4bc808bc/volumes" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.803706 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wtqs5\" (UniqueName: \"kubernetes.io/projected/ba66ec84-7f70-4462-a90a-9d91db534c99-kube-api-access-wtqs5\") pod \"cert-manager-webhook-7894b5b9b4-7597f\" (UID: \"ba66ec84-7f70-4462-a90a-9d91db534c99\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.806462 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/ba66ec84-7f70-4462-a90a-9d91db534c99-bound-sa-token\") pod \"cert-manager-webhook-7894b5b9b4-7597f\" (UID: \"ba66ec84-7f70-4462-a90a-9d91db534c99\") " pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:18 crc kubenswrapper[5117]: I0123 09:08:18.955317 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:19 crc kubenswrapper[5117]: I0123 09:08:19.367440 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-webhook-7894b5b9b4-7597f"] Jan 23 09:08:19 crc kubenswrapper[5117]: W0123 09:08:19.377219 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podba66ec84_7f70_4462_a90a_9d91db534c99.slice/crio-c61bf786ae3e3767f8b92fd287697320c1d32bf19fc525881a06e768c23bd673 WatchSource:0}: Error finding container c61bf786ae3e3767f8b92fd287697320c1d32bf19fc525881a06e768c23bd673: Status 404 returned error can't find the container with id c61bf786ae3e3767f8b92fd287697320c1d32bf19fc525881a06e768c23bd673 Jan 23 09:08:19 crc kubenswrapper[5117]: I0123 09:08:19.628168 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" event={"ID":"ba66ec84-7f70-4462-a90a-9d91db534c99","Type":"ContainerStarted","Data":"c61bf786ae3e3767f8b92fd287697320c1d32bf19fc525881a06e768c23bd673"} Jan 23 09:08:20 crc kubenswrapper[5117]: I0123 09:08:20.799499 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn"] Jan 23 09:08:20 crc kubenswrapper[5117]: I0123 09:08:20.808863 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn"] Jan 23 09:08:20 crc kubenswrapper[5117]: I0123 09:08:20.808997 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:20 crc kubenswrapper[5117]: I0123 09:08:20.821507 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-cainjector-dockercfg-9m4rh\"" Jan 23 09:08:20 crc kubenswrapper[5117]: I0123 09:08:20.914035 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e37c906-4fc2-4bfb-ae26-08ce2c858c5f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2jzcn\" (UID: \"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:20 crc kubenswrapper[5117]: I0123 09:08:20.914097 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-8698h\" (UniqueName: \"kubernetes.io/projected/2e37c906-4fc2-4bfb-ae26-08ce2c858c5f-kube-api-access-8698h\") pod \"cert-manager-cainjector-7dbf76d5c8-2jzcn\" (UID: \"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.015602 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e37c906-4fc2-4bfb-ae26-08ce2c858c5f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2jzcn\" (UID: \"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.015677 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-8698h\" (UniqueName: \"kubernetes.io/projected/2e37c906-4fc2-4bfb-ae26-08ce2c858c5f-kube-api-access-8698h\") pod \"cert-manager-cainjector-7dbf76d5c8-2jzcn\" (UID: \"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.035233 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-8698h\" (UniqueName: \"kubernetes.io/projected/2e37c906-4fc2-4bfb-ae26-08ce2c858c5f-kube-api-access-8698h\") pod \"cert-manager-cainjector-7dbf76d5c8-2jzcn\" (UID: \"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.039112 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/2e37c906-4fc2-4bfb-ae26-08ce2c858c5f-bound-sa-token\") pod \"cert-manager-cainjector-7dbf76d5c8-2jzcn\" (UID: \"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f\") " pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.126068 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.559267 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn"] Jan 23 09:08:21 crc kubenswrapper[5117]: I0123 09:08:21.643431 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" event={"ID":"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f","Type":"ContainerStarted","Data":"350e2e345e1a26de061fee5910f697d2914da97b67392518773a1924580d5e42"} Jan 23 09:08:26 crc kubenswrapper[5117]: I0123 09:08:26.671955 5117 generic.go:358] "Generic (PLEG): container finished" podID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerID="34fc1d153b3803796c9fe82a42ff6e092ece5fcf72a0aefe932520a163e3a02c" exitCode=0 Jan 23 09:08:26 crc kubenswrapper[5117]: I0123 09:08:26.672032 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerDied","Data":"34fc1d153b3803796c9fe82a42ff6e092ece5fcf72a0aefe932520a163e3a02c"} Jan 23 09:08:26 crc kubenswrapper[5117]: I0123 09:08:26.690361 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="0e58c26d-fd70-4f32-b491-ae446b4c1769" containerName="elasticsearch" probeResult="failure" output=< Jan 23 09:08:26 crc kubenswrapper[5117]: {"timestamp": "2026-01-23T09:08:26+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 09:08:26 crc kubenswrapper[5117]: > Jan 23 09:08:28 crc kubenswrapper[5117]: I0123 09:08:28.690278 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerStarted","Data":"f913c9b217bb7fb0a29824892d420aaf2d7bad26815b9c6c60751ae74d6983f7"} Jan 23 09:08:31 crc kubenswrapper[5117]: I0123 09:08:31.674745 5117 prober.go:120] "Probe failed" probeType="Readiness" pod="service-telemetry/elasticsearch-es-default-0" podUID="0e58c26d-fd70-4f32-b491-ae446b4c1769" containerName="elasticsearch" probeResult="failure" output=< Jan 23 09:08:31 crc kubenswrapper[5117]: {"timestamp": "2026-01-23T09:08:31+00:00", "message": "readiness probe failed", "curl_rc": "7"} Jan 23 09:08:31 crc kubenswrapper[5117]: > Jan 23 09:08:31 crc kubenswrapper[5117]: I0123 09:08:31.710333 5117 generic.go:358] "Generic (PLEG): container finished" podID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerID="f913c9b217bb7fb0a29824892d420aaf2d7bad26815b9c6c60751ae74d6983f7" exitCode=0 Jan 23 09:08:31 crc kubenswrapper[5117]: I0123 09:08:31.710416 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerDied","Data":"f913c9b217bb7fb0a29824892d420aaf2d7bad26815b9c6c60751ae74d6983f7"} Jan 23 09:08:33 crc kubenswrapper[5117]: I0123 09:08:33.749292 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-2-build_6c948705-ea0a-48e2-b9a3-74dbc4562c72/manage-dockerfile/0.log" Jan 23 09:08:36 crc kubenswrapper[5117]: I0123 09:08:36.823839 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/elasticsearch-es-default-0" Jan 23 09:08:37 crc kubenswrapper[5117]: I0123 09:08:37.725286 5117 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["cert-manager/cert-manager-858d87f86b-rdd8d"] Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.090426 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-rdd8d"] Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.090570 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.093655 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"cert-manager\"/\"cert-manager-dockercfg-wfp4t\"" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.171924 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8247d078-8f58-4aa8-a4b3-cb8c726f45b0-bound-sa-token\") pod \"cert-manager-858d87f86b-rdd8d\" (UID: \"8247d078-8f58-4aa8-a4b3-cb8c726f45b0\") " pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.172007 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-q94nz\" (UniqueName: \"kubernetes.io/projected/8247d078-8f58-4aa8-a4b3-cb8c726f45b0-kube-api-access-q94nz\") pod \"cert-manager-858d87f86b-rdd8d\" (UID: \"8247d078-8f58-4aa8-a4b3-cb8c726f45b0\") " pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.273015 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8247d078-8f58-4aa8-a4b3-cb8c726f45b0-bound-sa-token\") pod \"cert-manager-858d87f86b-rdd8d\" (UID: \"8247d078-8f58-4aa8-a4b3-cb8c726f45b0\") " pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.273096 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-q94nz\" (UniqueName: \"kubernetes.io/projected/8247d078-8f58-4aa8-a4b3-cb8c726f45b0-kube-api-access-q94nz\") pod \"cert-manager-858d87f86b-rdd8d\" (UID: \"8247d078-8f58-4aa8-a4b3-cb8c726f45b0\") " pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.301160 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-q94nz\" (UniqueName: \"kubernetes.io/projected/8247d078-8f58-4aa8-a4b3-cb8c726f45b0-kube-api-access-q94nz\") pod \"cert-manager-858d87f86b-rdd8d\" (UID: \"8247d078-8f58-4aa8-a4b3-cb8c726f45b0\") " pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.301595 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bound-sa-token\" (UniqueName: \"kubernetes.io/projected/8247d078-8f58-4aa8-a4b3-cb8c726f45b0-bound-sa-token\") pod \"cert-manager-858d87f86b-rdd8d\" (UID: \"8247d078-8f58-4aa8-a4b3-cb8c726f45b0\") " pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:38 crc kubenswrapper[5117]: I0123 09:08:38.426275 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="cert-manager/cert-manager-858d87f86b-rdd8d" Jan 23 09:08:40 crc kubenswrapper[5117]: I0123 09:08:40.584317 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["cert-manager/cert-manager-858d87f86b-rdd8d"] Jan 23 09:08:41 crc kubenswrapper[5117]: I0123 09:08:41.794664 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-rdd8d" event={"ID":"8247d078-8f58-4aa8-a4b3-cb8c726f45b0","Type":"ContainerStarted","Data":"8d957a97bd985599e7283c70a00f75b5b328d0d4f252c963e178655279e85a1d"} Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.802757 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" event={"ID":"2e37c906-4fc2-4bfb-ae26-08ce2c858c5f","Type":"ContainerStarted","Data":"e4e62beaad901a9023a4b5a5c43d3751142c83e8e0a35d426c6a305733cce13c"} Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.804673 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-858d87f86b-rdd8d" event={"ID":"8247d078-8f58-4aa8-a4b3-cb8c726f45b0","Type":"ContainerStarted","Data":"9ebce5bc09243d05722d9b05646c5fe2e09f48d7e33a93656815a97372e190ca"} Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.807130 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerStarted","Data":"08b15dbe7ce8bcd92de1c27a32b398bfef656153858c0d8fa3066263cec9d3f2"} Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.809490 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" event={"ID":"ba66ec84-7f70-4462-a90a-9d91db534c99","Type":"ContainerStarted","Data":"fa313068ce61afbc14fa9dba52b67087943bbf1d7864da60a32cea8347da60b4"} Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.809553 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.826671 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-cainjector-7dbf76d5c8-2jzcn" podStartSLOduration=2.551917746 podStartE2EDuration="22.826653841s" podCreationTimestamp="2026-01-23 09:08:20 +0000 UTC" firstStartedPulling="2026-01-23 09:08:21.591704322 +0000 UTC m=+913.347829348" lastFinishedPulling="2026-01-23 09:08:41.866440427 +0000 UTC m=+933.622565443" observedRunningTime="2026-01-23 09:08:42.821115371 +0000 UTC m=+934.577240407" watchObservedRunningTime="2026-01-23 09:08:42.826653841 +0000 UTC m=+934.582778877" Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.842556 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-858d87f86b-rdd8d" podStartSLOduration=5.84254122 podStartE2EDuration="5.84254122s" podCreationTimestamp="2026-01-23 09:08:37 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:42.842082937 +0000 UTC m=+934.598207963" watchObservedRunningTime="2026-01-23 09:08:42.84254122 +0000 UTC m=+934.598666246" Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.902979 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" podStartSLOduration=2.4939581410000002 podStartE2EDuration="24.902962356s" 
podCreationTimestamp="2026-01-23 09:08:18 +0000 UTC" firstStartedPulling="2026-01-23 09:08:19.381314302 +0000 UTC m=+911.137439328" lastFinishedPulling="2026-01-23 09:08:41.790318517 +0000 UTC m=+933.546443543" observedRunningTime="2026-01-23 09:08:42.899530757 +0000 UTC m=+934.655655783" watchObservedRunningTime="2026-01-23 09:08:42.902962356 +0000 UTC m=+934.659087392" Jan 23 09:08:42 crc kubenswrapper[5117]: I0123 09:08:42.903566 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-2-build" podStartSLOduration=29.903560854 podStartE2EDuration="29.903560854s" podCreationTimestamp="2026-01-23 09:08:13 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:08:42.881895648 +0000 UTC m=+934.638020694" watchObservedRunningTime="2026-01-23 09:08:42.903560854 +0000 UTC m=+934.659685880" Jan 23 09:08:45 crc kubenswrapper[5117]: I0123 09:08:45.063413 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:08:45 crc kubenswrapper[5117]: I0123 09:08:45.063524 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:08:48 crc kubenswrapper[5117]: I0123 09:08:48.817850 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="cert-manager/cert-manager-webhook-7894b5b9b4-7597f" Jan 23 09:09:15 crc kubenswrapper[5117]: I0123 09:09:15.049051 5117 scope.go:117] "RemoveContainer" containerID="a79f864d8e4a0e465084f4083835d81bf3a949df7f3b25859cabcec4a25fe78f" Jan 23 09:09:15 crc kubenswrapper[5117]: I0123 09:09:15.067405 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:09:15 crc kubenswrapper[5117]: I0123 09:09:15.067573 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:09:45 crc kubenswrapper[5117]: I0123 09:09:45.063707 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:09:45 crc kubenswrapper[5117]: I0123 09:09:45.064368 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: 
connection refused" Jan 23 09:09:45 crc kubenswrapper[5117]: I0123 09:09:45.064445 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:09:45 crc kubenswrapper[5117]: I0123 09:09:45.065246 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"cca1814ea22f487f6803da65dbe0a07d6e9a455a9d99b67f2cffa31f9de502dd"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:09:45 crc kubenswrapper[5117]: I0123 09:09:45.065325 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://cca1814ea22f487f6803da65dbe0a07d6e9a455a9d99b67f2cffa31f9de502dd" gracePeriod=600 Jan 23 09:09:45 crc kubenswrapper[5117]: I0123 09:09:45.783726 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:09:46 crc kubenswrapper[5117]: I0123 09:09:46.235576 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="cca1814ea22f487f6803da65dbe0a07d6e9a455a9d99b67f2cffa31f9de502dd" exitCode=0 Jan 23 09:09:46 crc kubenswrapper[5117]: I0123 09:09:46.235799 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"cca1814ea22f487f6803da65dbe0a07d6e9a455a9d99b67f2cffa31f9de502dd"} Jan 23 09:09:46 crc kubenswrapper[5117]: I0123 09:09:46.235835 5117 scope.go:117] "RemoveContainer" containerID="652ecc0b605bacfe2a16dd899d40ace6b8068602de3571538cddd08bf11b07a4" Jan 23 09:09:47 crc kubenswrapper[5117]: I0123 09:09:47.246471 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"867cd9670ed8afceb2b30c6ee2fd23303a86c3de53c70730de425c5a0b5ccede"} Jan 23 09:10:00 crc kubenswrapper[5117]: I0123 09:10:00.134754 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485990-zjdbs"] Jan 23 09:10:07 crc kubenswrapper[5117]: I0123 09:10:07.775991 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:07 crc kubenswrapper[5117]: I0123 09:10:07.781096 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:10:07 crc kubenswrapper[5117]: I0123 09:10:07.781172 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:10:07 crc kubenswrapper[5117]: I0123 09:10:07.784487 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485990-zjdbs"] Jan 23 09:10:07 crc kubenswrapper[5117]: I0123 09:10:07.784798 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:10:07 crc kubenswrapper[5117]: I0123 09:10:07.926935 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nqg5r\" (UniqueName: \"kubernetes.io/projected/7868dc80-3681-42b1-b23a-e117f348ff9a-kube-api-access-nqg5r\") pod \"auto-csr-approver-29485990-zjdbs\" (UID: \"7868dc80-3681-42b1-b23a-e117f348ff9a\") " pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:08 crc kubenswrapper[5117]: I0123 09:10:08.029032 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nqg5r\" (UniqueName: \"kubernetes.io/projected/7868dc80-3681-42b1-b23a-e117f348ff9a-kube-api-access-nqg5r\") pod \"auto-csr-approver-29485990-zjdbs\" (UID: \"7868dc80-3681-42b1-b23a-e117f348ff9a\") " pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:08 crc kubenswrapper[5117]: I0123 09:10:08.050920 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nqg5r\" (UniqueName: \"kubernetes.io/projected/7868dc80-3681-42b1-b23a-e117f348ff9a-kube-api-access-nqg5r\") pod \"auto-csr-approver-29485990-zjdbs\" (UID: \"7868dc80-3681-42b1-b23a-e117f348ff9a\") " pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:08 crc kubenswrapper[5117]: I0123 09:10:08.096946 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:08 crc kubenswrapper[5117]: I0123 09:10:08.507515 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485990-zjdbs"] Jan 23 09:10:09 crc kubenswrapper[5117]: I0123 09:10:09.396455 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" event={"ID":"7868dc80-3681-42b1-b23a-e117f348ff9a","Type":"ContainerStarted","Data":"82b22a1abbc3b33e8e9ffda5157b150b7624b1991ca5ad1bd8ae21850b527a55"} Jan 23 09:10:22 crc kubenswrapper[5117]: I0123 09:10:22.491976 5117 generic.go:358] "Generic (PLEG): container finished" podID="7868dc80-3681-42b1-b23a-e117f348ff9a" containerID="c8d18002d5e2829e6368b878e9961b9d5e42fe3af27ba1379fd080334ae46163" exitCode=0 Jan 23 09:10:22 crc kubenswrapper[5117]: I0123 09:10:22.492089 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" event={"ID":"7868dc80-3681-42b1-b23a-e117f348ff9a","Type":"ContainerDied","Data":"c8d18002d5e2829e6368b878e9961b9d5e42fe3af27ba1379fd080334ae46163"} Jan 23 09:10:23 crc kubenswrapper[5117]: I0123 09:10:23.719679 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:23 crc kubenswrapper[5117]: I0123 09:10:23.891687 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nqg5r\" (UniqueName: \"kubernetes.io/projected/7868dc80-3681-42b1-b23a-e117f348ff9a-kube-api-access-nqg5r\") pod \"7868dc80-3681-42b1-b23a-e117f348ff9a\" (UID: \"7868dc80-3681-42b1-b23a-e117f348ff9a\") " Jan 23 09:10:23 crc kubenswrapper[5117]: I0123 09:10:23.897935 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7868dc80-3681-42b1-b23a-e117f348ff9a-kube-api-access-nqg5r" (OuterVolumeSpecName: "kube-api-access-nqg5r") pod "7868dc80-3681-42b1-b23a-e117f348ff9a" (UID: "7868dc80-3681-42b1-b23a-e117f348ff9a"). InnerVolumeSpecName "kube-api-access-nqg5r". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:10:23 crc kubenswrapper[5117]: I0123 09:10:23.993169 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nqg5r\" (UniqueName: \"kubernetes.io/projected/7868dc80-3681-42b1-b23a-e117f348ff9a-kube-api-access-nqg5r\") on node \"crc\" DevicePath \"\"" Jan 23 09:10:24 crc kubenswrapper[5117]: I0123 09:10:24.505392 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" event={"ID":"7868dc80-3681-42b1-b23a-e117f348ff9a","Type":"ContainerDied","Data":"82b22a1abbc3b33e8e9ffda5157b150b7624b1991ca5ad1bd8ae21850b527a55"} Jan 23 09:10:24 crc kubenswrapper[5117]: I0123 09:10:24.505439 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="82b22a1abbc3b33e8e9ffda5157b150b7624b1991ca5ad1bd8ae21850b527a55" Jan 23 09:10:24 crc kubenswrapper[5117]: I0123 09:10:24.505445 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485990-zjdbs" Jan 23 09:10:24 crc kubenswrapper[5117]: I0123 09:10:24.792730 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485984-sm46f"] Jan 23 09:10:24 crc kubenswrapper[5117]: I0123 09:10:24.796491 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485984-sm46f"] Jan 23 09:10:26 crc kubenswrapper[5117]: I0123 09:10:26.779619 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="eb384748-18de-4780-81ca-0bb474334d96" path="/var/lib/kubelet/pods/eb384748-18de-4780-81ca-0bb474334d96/volumes" Jan 23 09:11:14 crc kubenswrapper[5117]: I0123 09:11:14.961003 5117 generic.go:358] "Generic (PLEG): container finished" podID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerID="08b15dbe7ce8bcd92de1c27a32b398bfef656153858c0d8fa3066263cec9d3f2" exitCode=0 Jan 23 09:11:14 crc kubenswrapper[5117]: I0123 09:11:14.961221 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerDied","Data":"08b15dbe7ce8bcd92de1c27a32b398bfef656153858c0d8fa3066263cec9d3f2"} Jan 23 09:11:15 crc kubenswrapper[5117]: I0123 09:11:15.192113 5117 scope.go:117] "RemoveContainer" containerID="e09028e6d51bf24dac649c6f231c3d3d5d0822c929cb256ad19188bb3f612cf8" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.210812 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330124 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-node-pullsecrets\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330241 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-proxy-ca-bundles\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330311 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-run\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330353 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-system-configs\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330390 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-pull\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330431 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildworkdir\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330455 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-ca-bundles\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330474 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-root\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330491 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-dmpls\" (UniqueName: \"kubernetes.io/projected/6c948705-ea0a-48e2-b9a3-74dbc4562c72-kube-api-access-dmpls\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330579 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" 
(UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-push\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330598 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-blob-cache\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330618 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildcachedir\") pod \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\" (UID: \"6c948705-ea0a-48e2-b9a3-74dbc4562c72\") " Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.330892 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.331326 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.331838 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.331876 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.332108 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.332958 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). 
InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.339598 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.339622 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.339780 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c948705-ea0a-48e2-b9a3-74dbc4562c72-kube-api-access-dmpls" (OuterVolumeSpecName: "kube-api-access-dmpls") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "kube-api-access-dmpls". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.368250 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432063 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432092 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c948705-ea0a-48e2-b9a3-74dbc4562c72-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432102 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432112 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432120 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432140 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432150 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432158 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432166 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-dmpls\" (UniqueName: \"kubernetes.io/projected/6c948705-ea0a-48e2-b9a3-74dbc4562c72-kube-api-access-dmpls\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.432174 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c948705-ea0a-48e2-b9a3-74dbc4562c72-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.509840 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.533784 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.978197 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-2-build" event={"ID":"6c948705-ea0a-48e2-b9a3-74dbc4562c72","Type":"ContainerDied","Data":"b67e8a2ace26db89d862c84d6e30d01e42d44651dd8665e01ce13376fcc77de4"} Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.978288 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b67e8a2ace26db89d862c84d6e30d01e42d44651dd8665e01ce13376fcc77de4" Jan 23 09:11:16 crc kubenswrapper[5117]: I0123 09:11:16.978362 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-2-build" Jan 23 09:11:18 crc kubenswrapper[5117]: I0123 09:11:18.087492 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6c948705-ea0a-48e2-b9a3-74dbc4562c72" (UID: "6c948705-ea0a-48e2-b9a3-74dbc4562c72"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:18 crc kubenswrapper[5117]: I0123 09:11:18.156321 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c948705-ea0a-48e2-b9a3-74dbc4562c72-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.769902 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.770943 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7868dc80-3681-42b1-b23a-e117f348ff9a" containerName="oc" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.770961 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7868dc80-3681-42b1-b23a-e117f348ff9a" containerName="oc" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.770980 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="docker-build" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.770990 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="docker-build" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.771023 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="git-clone" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.771031 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="git-clone" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.771041 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="manage-dockerfile" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.771049 5117 state_mem.go:107] "Deleted CPUSet assignment" 
podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="manage-dockerfile" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.771188 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c948705-ea0a-48e2-b9a3-74dbc4562c72" containerName="docker-build" Jan 23 09:11:20 crc kubenswrapper[5117]: I0123 09:11:20.771209 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7868dc80-3681-42b1-b23a-e117f348ff9a" containerName="oc" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.411507 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.411531 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.414364 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.414596 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-ca\"" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.416255 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-global-ca\"" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.416259 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-1-sys-config\"" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517236 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d7z4z\" (UniqueName: \"kubernetes.io/projected/34562fce-eb17-4c18-8a4f-5042f99011d7-kube-api-access-d7z4z\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517298 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517342 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517422 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517547 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517686 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517757 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517794 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517867 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.517884 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.518045 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.518077 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619320 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-pull\") pod 
\"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619422 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619469 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619526 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619550 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619604 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619647 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619702 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d7z4z\" (UniqueName: \"kubernetes.io/projected/34562fce-eb17-4c18-8a4f-5042f99011d7-kube-api-access-d7z4z\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619732 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619770 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619905 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619947 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-buildworkdir\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619966 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.620116 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-build-blob-cache\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.620170 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-buildcachedir\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.620241 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-run\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.620584 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-root\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.620801 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-proxy-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc 
kubenswrapper[5117]: I0123 09:11:22.621295 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-ca-bundles\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.619827 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-node-pullsecrets\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.623617 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-system-configs\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.626006 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.629907 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.645266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d7z4z\" (UniqueName: \"kubernetes.io/projected/34562fce-eb17-4c18-8a4f-5042f99011d7-kube-api-access-d7z4z\") pod \"smart-gateway-operator-1-build\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.733953 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:22 crc kubenswrapper[5117]: I0123 09:11:22.943056 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 09:11:23 crc kubenswrapper[5117]: I0123 09:11:23.015471 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"34562fce-eb17-4c18-8a4f-5042f99011d7","Type":"ContainerStarted","Data":"6307ba7a7e1f31c2cf08790dcd63a36a66f5ee1373574f5219c7ddf0299ae515"} Jan 23 09:11:25 crc kubenswrapper[5117]: I0123 09:11:25.029528 5117 generic.go:358] "Generic (PLEG): container finished" podID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerID="6e535deec680e6715cc277328a636b1c74f56f0cb94757eb51518bdc299d3769" exitCode=0 Jan 23 09:11:25 crc kubenswrapper[5117]: I0123 09:11:25.029579 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"34562fce-eb17-4c18-8a4f-5042f99011d7","Type":"ContainerDied","Data":"6e535deec680e6715cc277328a636b1c74f56f0cb94757eb51518bdc299d3769"} Jan 23 09:11:26 crc kubenswrapper[5117]: I0123 09:11:26.038239 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"34562fce-eb17-4c18-8a4f-5042f99011d7","Type":"ContainerStarted","Data":"25402f1041506f48a02df805cbd4687b27ab57a9da5daed759637368f281bbad"} Jan 23 09:11:26 crc kubenswrapper[5117]: I0123 09:11:26.060205 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-1-build" podStartSLOduration=6.060187417 podStartE2EDuration="6.060187417s" podCreationTimestamp="2026-01-23 09:11:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:26.056362737 +0000 UTC m=+1097.812487783" watchObservedRunningTime="2026-01-23 09:11:26.060187417 +0000 UTC m=+1097.816312433" Jan 23 09:11:31 crc kubenswrapper[5117]: I0123 09:11:31.686857 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 09:11:31 crc kubenswrapper[5117]: I0123 09:11:31.687895 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/smart-gateway-operator-1-build" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerName="docker-build" containerID="cri-o://25402f1041506f48a02df805cbd4687b27ab57a9da5daed759637368f281bbad" gracePeriod=30 Jan 23 09:11:33 crc kubenswrapper[5117]: I0123 09:11:33.352741 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 23 09:11:36 crc kubenswrapper[5117]: I0123 09:11:36.989346 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:36 crc kubenswrapper[5117]: I0123 09:11:36.992615 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-global-ca\"" Jan 23 09:11:36 crc kubenswrapper[5117]: I0123 09:11:36.992685 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-sys-config\"" Jan 23 09:11:36 crc kubenswrapper[5117]: I0123 09:11:36.993273 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-2-ca\"" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.006100 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030294 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030340 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030487 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030516 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030636 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030690 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030792 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030807 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030823 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lpgcc\" (UniqueName: \"kubernetes.io/projected/6c90d837-3320-4ed1-b154-70117a77c1e8-kube-api-access-lpgcc\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030891 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030918 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.030943 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132057 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132122 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132234 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lpgcc\" (UniqueName: 
\"kubernetes.io/projected/6c90d837-3320-4ed1-b154-70117a77c1e8-kube-api-access-lpgcc\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132268 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132292 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132321 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132420 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-node-pullsecrets\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132499 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-buildcachedir\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132538 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132691 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132817 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132931 
5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.132998 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-root\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.133059 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-run\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.133090 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-buildworkdir\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.133328 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.133373 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.133496 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-build-blob-cache\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.133561 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.134163 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-proxy-ca-bundles\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " 
pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.134833 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-system-configs\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.140245 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.140247 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.152045 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lpgcc\" (UniqueName: \"kubernetes.io/projected/6c90d837-3320-4ed1-b154-70117a77c1e8-kube-api-access-lpgcc\") pod \"smart-gateway-operator-2-build\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.319292 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.523691 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-2-build"] Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.954192 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_34562fce-eb17-4c18-8a4f-5042f99011d7/docker-build/0.log" Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.954791 5117 generic.go:358] "Generic (PLEG): container finished" podID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerID="25402f1041506f48a02df805cbd4687b27ab57a9da5daed759637368f281bbad" exitCode=1 Jan 23 09:11:37 crc kubenswrapper[5117]: I0123 09:11:37.954918 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"34562fce-eb17-4c18-8a4f-5042f99011d7","Type":"ContainerDied","Data":"25402f1041506f48a02df805cbd4687b27ab57a9da5daed759637368f281bbad"} Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.764850 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_34562fce-eb17-4c18-8a4f-5042f99011d7/docker-build/0.log" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.765668 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857315 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-root\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857378 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-proxy-ca-bundles\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857478 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-pull\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857515 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-node-pullsecrets\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857579 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-buildworkdir\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857609 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-buildcachedir\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857684 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-system-configs\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857741 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-build-blob-cache\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857774 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-ca-bundles\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857983 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-d7z4z\" (UniqueName: 
\"kubernetes.io/projected/34562fce-eb17-4c18-8a4f-5042f99011d7-kube-api-access-d7z4z\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.857993 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.858027 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-push\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.858049 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.858067 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-run\") pod \"34562fce-eb17-4c18-8a4f-5042f99011d7\" (UID: \"34562fce-eb17-4c18-8a4f-5042f99011d7\") " Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.859113 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.859175 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/34562fce-eb17-4c18-8a4f-5042f99011d7-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.859457 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.859970 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.860218 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.860775 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.861027 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.861299 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.864668 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.864965 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.865337 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/34562fce-eb17-4c18-8a4f-5042f99011d7-kube-api-access-d7z4z" (OuterVolumeSpecName: "kube-api-access-d7z4z") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "kube-api-access-d7z4z". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.907788 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "34562fce-eb17-4c18-8a4f-5042f99011d7" (UID: "34562fce-eb17-4c18-8a4f-5042f99011d7"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960262 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960303 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960316 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960331 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960346 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960357 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960368 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/34562fce-eb17-4c18-8a4f-5042f99011d7-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960379 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-d7z4z\" (UniqueName: \"kubernetes.io/projected/34562fce-eb17-4c18-8a4f-5042f99011d7-kube-api-access-d7z4z\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960389 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/34562fce-eb17-4c18-8a4f-5042f99011d7-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.960402 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/34562fce-eb17-4c18-8a4f-5042f99011d7-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.969560 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-1-build_34562fce-eb17-4c18-8a4f-5042f99011d7/docker-build/0.log" Jan 23 
09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.970098 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-1-build" event={"ID":"34562fce-eb17-4c18-8a4f-5042f99011d7","Type":"ContainerDied","Data":"6307ba7a7e1f31c2cf08790dcd63a36a66f5ee1373574f5219c7ddf0299ae515"} Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.970171 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-1-build" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.970216 5117 scope.go:117] "RemoveContainer" containerID="25402f1041506f48a02df805cbd4687b27ab57a9da5daed759637368f281bbad" Jan 23 09:11:38 crc kubenswrapper[5117]: I0123 09:11:38.974552 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerStarted","Data":"71f997e3466a2382e783a7981c811818a5ca53684142452ab78cdc231475bd13"} Jan 23 09:11:39 crc kubenswrapper[5117]: I0123 09:11:39.013464 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 09:11:39 crc kubenswrapper[5117]: I0123 09:11:39.017253 5117 scope.go:117] "RemoveContainer" containerID="6e535deec680e6715cc277328a636b1c74f56f0cb94757eb51518bdc299d3769" Jan 23 09:11:39 crc kubenswrapper[5117]: I0123 09:11:39.017956 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-1-build"] Jan 23 09:11:39 crc kubenswrapper[5117]: I0123 09:11:39.982610 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerStarted","Data":"866360fae5b191833486683310cc5fd9cb4313691bad419cdc663b7a980a85f6"} Jan 23 09:11:40 crc kubenswrapper[5117]: I0123 09:11:40.779197 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" path="/var/lib/kubelet/pods/34562fce-eb17-4c18-8a4f-5042f99011d7/volumes" Jan 23 09:11:40 crc kubenswrapper[5117]: I0123 09:11:40.992022 5117 generic.go:358] "Generic (PLEG): container finished" podID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerID="866360fae5b191833486683310cc5fd9cb4313691bad419cdc663b7a980a85f6" exitCode=0 Jan 23 09:11:40 crc kubenswrapper[5117]: I0123 09:11:40.992119 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerDied","Data":"866360fae5b191833486683310cc5fd9cb4313691bad419cdc663b7a980a85f6"} Jan 23 09:11:43 crc kubenswrapper[5117]: I0123 09:11:43.007553 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerStarted","Data":"1a9bab1e0f73efcae70320e70296ee055ce78bbb6c6f90d48c88c8517c9a01c2"} Jan 23 09:11:44 crc kubenswrapper[5117]: I0123 09:11:44.014807 5117 generic.go:358] "Generic (PLEG): container finished" podID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerID="1a9bab1e0f73efcae70320e70296ee055ce78bbb6c6f90d48c88c8517c9a01c2" exitCode=0 Jan 23 09:11:44 crc kubenswrapper[5117]: I0123 09:11:44.014922 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" 
event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerDied","Data":"1a9bab1e0f73efcae70320e70296ee055ce78bbb6c6f90d48c88c8517c9a01c2"} Jan 23 09:11:44 crc kubenswrapper[5117]: I0123 09:11:44.059756 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-2-build_6c90d837-3320-4ed1-b154-70117a77c1e8/manage-dockerfile/0.log" Jan 23 09:11:45 crc kubenswrapper[5117]: I0123 09:11:45.026736 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerStarted","Data":"9bb3fed73ee3bede3caca81c1a49def2406aeebb975c08e944c91d53b66e326e"} Jan 23 09:11:45 crc kubenswrapper[5117]: I0123 09:11:45.056071 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-2-build" podStartSLOduration=12.056053386 podStartE2EDuration="12.056053386s" podCreationTimestamp="2026-01-23 09:11:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:11:45.04852557 +0000 UTC m=+1116.804650596" watchObservedRunningTime="2026-01-23 09:11:45.056053386 +0000 UTC m=+1116.812178412" Jan 23 09:12:00 crc kubenswrapper[5117]: I0123 09:12:00.174292 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485992-lhs64"] Jan 23 09:12:00 crc kubenswrapper[5117]: I0123 09:12:00.175615 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerName="docker-build" Jan 23 09:12:00 crc kubenswrapper[5117]: I0123 09:12:00.175633 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerName="docker-build" Jan 23 09:12:00 crc kubenswrapper[5117]: I0123 09:12:00.175669 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerName="manage-dockerfile" Jan 23 09:12:00 crc kubenswrapper[5117]: I0123 09:12:00.175677 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerName="manage-dockerfile" Jan 23 09:12:00 crc kubenswrapper[5117]: I0123 09:12:00.175833 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="34562fce-eb17-4c18-8a4f-5042f99011d7" containerName="docker-build" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.410072 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485992-lhs64"] Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.410323 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.412756 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.412806 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.413471 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.505933 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-r6mtv\" (UniqueName: \"kubernetes.io/projected/28276b89-2814-41a6-847d-9048fefdb222-kube-api-access-r6mtv\") pod \"auto-csr-approver-29485992-lhs64\" (UID: \"28276b89-2814-41a6-847d-9048fefdb222\") " pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.607613 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-r6mtv\" (UniqueName: \"kubernetes.io/projected/28276b89-2814-41a6-847d-9048fefdb222-kube-api-access-r6mtv\") pod \"auto-csr-approver-29485992-lhs64\" (UID: \"28276b89-2814-41a6-847d-9048fefdb222\") " pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.637833 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-r6mtv\" (UniqueName: \"kubernetes.io/projected/28276b89-2814-41a6-847d-9048fefdb222-kube-api-access-r6mtv\") pod \"auto-csr-approver-29485992-lhs64\" (UID: \"28276b89-2814-41a6-847d-9048fefdb222\") " pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.739036 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:01 crc kubenswrapper[5117]: I0123 09:12:01.959084 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485992-lhs64"] Jan 23 09:12:01 crc kubenswrapper[5117]: W0123 09:12:01.965186 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod28276b89_2814_41a6_847d_9048fefdb222.slice/crio-24ce4c8f3533c61285cb20459313e7555c358355525f3c4302f9a6b6cfbc7c2f WatchSource:0}: Error finding container 24ce4c8f3533c61285cb20459313e7555c358355525f3c4302f9a6b6cfbc7c2f: Status 404 returned error can't find the container with id 24ce4c8f3533c61285cb20459313e7555c358355525f3c4302f9a6b6cfbc7c2f Jan 23 09:12:02 crc kubenswrapper[5117]: I0123 09:12:02.142223 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485992-lhs64" event={"ID":"28276b89-2814-41a6-847d-9048fefdb222","Type":"ContainerStarted","Data":"24ce4c8f3533c61285cb20459313e7555c358355525f3c4302f9a6b6cfbc7c2f"} Jan 23 09:12:05 crc kubenswrapper[5117]: I0123 09:12:05.164856 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485992-lhs64" event={"ID":"28276b89-2814-41a6-847d-9048fefdb222","Type":"ContainerStarted","Data":"17c84c758fe0aeaacc237c9f0189ec30c1205e5d6ce6fd782eaf906f48ccd386"} Jan 23 09:12:06 crc kubenswrapper[5117]: I0123 09:12:06.172842 5117 generic.go:358] "Generic (PLEG): container finished" podID="28276b89-2814-41a6-847d-9048fefdb222" containerID="17c84c758fe0aeaacc237c9f0189ec30c1205e5d6ce6fd782eaf906f48ccd386" exitCode=0 Jan 23 09:12:06 crc kubenswrapper[5117]: I0123 09:12:06.172909 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485992-lhs64" event={"ID":"28276b89-2814-41a6-847d-9048fefdb222","Type":"ContainerDied","Data":"17c84c758fe0aeaacc237c9f0189ec30c1205e5d6ce6fd782eaf906f48ccd386"} Jan 23 09:12:07 crc kubenswrapper[5117]: I0123 09:12:07.834514 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:07 crc kubenswrapper[5117]: I0123 09:12:07.890031 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-r6mtv\" (UniqueName: \"kubernetes.io/projected/28276b89-2814-41a6-847d-9048fefdb222-kube-api-access-r6mtv\") pod \"28276b89-2814-41a6-847d-9048fefdb222\" (UID: \"28276b89-2814-41a6-847d-9048fefdb222\") " Jan 23 09:12:07 crc kubenswrapper[5117]: I0123 09:12:07.896841 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/28276b89-2814-41a6-847d-9048fefdb222-kube-api-access-r6mtv" (OuterVolumeSpecName: "kube-api-access-r6mtv") pod "28276b89-2814-41a6-847d-9048fefdb222" (UID: "28276b89-2814-41a6-847d-9048fefdb222"). InnerVolumeSpecName "kube-api-access-r6mtv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:12:07 crc kubenswrapper[5117]: I0123 09:12:07.992059 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-r6mtv\" (UniqueName: \"kubernetes.io/projected/28276b89-2814-41a6-847d-9048fefdb222-kube-api-access-r6mtv\") on node \"crc\" DevicePath \"\"" Jan 23 09:12:08 crc kubenswrapper[5117]: I0123 09:12:08.187175 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485992-lhs64" Jan 23 09:12:08 crc kubenswrapper[5117]: I0123 09:12:08.187200 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485992-lhs64" event={"ID":"28276b89-2814-41a6-847d-9048fefdb222","Type":"ContainerDied","Data":"24ce4c8f3533c61285cb20459313e7555c358355525f3c4302f9a6b6cfbc7c2f"} Jan 23 09:12:08 crc kubenswrapper[5117]: I0123 09:12:08.187238 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="24ce4c8f3533c61285cb20459313e7555c358355525f3c4302f9a6b6cfbc7c2f" Jan 23 09:12:08 crc kubenswrapper[5117]: I0123 09:12:08.905147 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485986-g7fxs"] Jan 23 09:12:08 crc kubenswrapper[5117]: I0123 09:12:08.913000 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485986-g7fxs"] Jan 23 09:12:10 crc kubenswrapper[5117]: I0123 09:12:10.781354 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="27b2bd0c-1d1b-43cb-8111-fadf43f06788" path="/var/lib/kubelet/pods/27b2bd0c-1d1b-43cb-8111-fadf43f06788/volumes" Jan 23 09:12:15 crc kubenswrapper[5117]: I0123 09:12:15.062999 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:12:15 crc kubenswrapper[5117]: I0123 09:12:15.063336 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:12:15 crc kubenswrapper[5117]: I0123 09:12:15.283833 5117 scope.go:117] "RemoveContainer" containerID="7b5024136536b820c7e70cbd9b1d9916000f3cc972bfa615fd602a422cd98e16" Jan 23 09:12:45 crc kubenswrapper[5117]: I0123 09:12:45.062911 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:12:45 crc kubenswrapper[5117]: I0123 09:12:45.063565 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:13:14 crc kubenswrapper[5117]: I0123 09:13:14.643809 5117 generic.go:358] "Generic (PLEG): container finished" podID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerID="9bb3fed73ee3bede3caca81c1a49def2406aeebb975c08e944c91d53b66e326e" exitCode=0 Jan 23 09:13:14 crc kubenswrapper[5117]: I0123 09:13:14.643878 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerDied","Data":"9bb3fed73ee3bede3caca81c1a49def2406aeebb975c08e944c91d53b66e326e"} Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.062876 5117 
patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.062948 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.063004 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.063610 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"867cd9670ed8afceb2b30c6ee2fd23303a86c3de53c70730de425c5a0b5ccede"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.063678 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://867cd9670ed8afceb2b30c6ee2fd23303a86c3de53c70730de425c5a0b5ccede" gracePeriod=600 Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.346064 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.348795 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.351449 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.353787 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:13:15 crc kubenswrapper[5117]: I0123 09:13:15.938072 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.034760 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-run\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.034841 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-root\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.034871 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-proxy-ca-bundles\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.034936 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-build-blob-cache\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035031 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-buildworkdir\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035229 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-node-pullsecrets\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035323 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-push\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035359 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lpgcc\" (UniqueName: \"kubernetes.io/projected/6c90d837-3320-4ed1-b154-70117a77c1e8-kube-api-access-lpgcc\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035380 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-buildcachedir\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035559 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: 
\"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-pull\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035632 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-system-configs\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035667 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-ca-bundles\") pod \"6c90d837-3320-4ed1-b154-70117a77c1e8\" (UID: \"6c90d837-3320-4ed1-b154-70117a77c1e8\") " Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035367 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035850 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.035912 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.036646 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.036686 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.036697 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6c90d837-3320-4ed1-b154-70117a77c1e8-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.036913 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "build-system-configs". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.037373 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.037560 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.040003 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.041546 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.041610 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.041615 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6c90d837-3320-4ed1-b154-70117a77c1e8-kube-api-access-lpgcc" (OuterVolumeSpecName: "kube-api-access-lpgcc") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "kube-api-access-lpgcc". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138309 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138348 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138359 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lpgcc\" (UniqueName: \"kubernetes.io/projected/6c90d837-3320-4ed1-b154-70117a77c1e8-kube-api-access-lpgcc\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138368 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6c90d837-3320-4ed1-b154-70117a77c1e8-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138382 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138391 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.138401 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6c90d837-3320-4ed1-b154-70117a77c1e8-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.253327 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.342666 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.662191 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-2-build" event={"ID":"6c90d837-3320-4ed1-b154-70117a77c1e8","Type":"ContainerDied","Data":"71f997e3466a2382e783a7981c811818a5ca53684142452ab78cdc231475bd13"} Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.662250 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="71f997e3466a2382e783a7981c811818a5ca53684142452ab78cdc231475bd13" Jan 23 09:13:16 crc kubenswrapper[5117]: I0123 09:13:16.662202 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-2-build" Jan 23 09:13:17 crc kubenswrapper[5117]: I0123 09:13:17.820468 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6c90d837-3320-4ed1-b154-70117a77c1e8" (UID: "6c90d837-3320-4ed1-b154-70117a77c1e8"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:17 crc kubenswrapper[5117]: I0123 09:13:17.869436 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6c90d837-3320-4ed1-b154-70117a77c1e8-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:18 crc kubenswrapper[5117]: I0123 09:13:18.676660 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="867cd9670ed8afceb2b30c6ee2fd23303a86c3de53c70730de425c5a0b5ccede" exitCode=0 Jan 23 09:13:18 crc kubenswrapper[5117]: I0123 09:13:18.676806 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"867cd9670ed8afceb2b30c6ee2fd23303a86c3de53c70730de425c5a0b5ccede"} Jan 23 09:13:18 crc kubenswrapper[5117]: I0123 09:13:18.676839 5117 scope.go:117] "RemoveContainer" containerID="cca1814ea22f487f6803da65dbe0a07d6e9a455a9d99b67f2cffa31f9de502dd" Jan 23 09:13:19 crc kubenswrapper[5117]: I0123 09:13:19.685875 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"26aa588973e2b79613df878d11883a627be7e28f6aad990afd10a1ff422b5018"} Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.988080 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989342 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="manage-dockerfile" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989364 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="manage-dockerfile" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989387 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="28276b89-2814-41a6-847d-9048fefdb222" containerName="oc" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989406 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="28276b89-2814-41a6-847d-9048fefdb222" containerName="oc" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989431 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="docker-build" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989445 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="docker-build" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989466 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="git-clone" Jan 23 09:13:20 crc kubenswrapper[5117]: 
I0123 09:13:20.989472 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="git-clone" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989574 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="28276b89-2814-41a6-847d-9048fefdb222" containerName="oc" Jan 23 09:13:20 crc kubenswrapper[5117]: I0123 09:13:20.989591 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="6c90d837-3320-4ed1-b154-70117a77c1e8" containerName="docker-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.008805 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.008957 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.012004 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-sys-config\"" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.012779 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-global-ca\"" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.013917 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-1-ca\"" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.016484 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.112832 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.112900 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildcachedir\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.112942 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildworkdir\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.112968 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-push\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.112996 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-system-configs\") pod \"sg-core-1-build\" (UID: 
\"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113029 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-pull\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113143 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-root\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113392 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-w8bh7\" (UniqueName: \"kubernetes.io/projected/86daea63-6205-4d96-aff4-2fb2aa127ed3-kube-api-access-w8bh7\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113485 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113513 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113540 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.113578 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-run\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215255 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215328 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildcachedir\") pod 
\"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215355 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildworkdir\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215371 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-push\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215401 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-system-configs\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215443 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-pull\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215466 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-root\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215497 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-w8bh7\" (UniqueName: \"kubernetes.io/projected/86daea63-6205-4d96-aff4-2fb2aa127ed3-kube-api-access-w8bh7\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215521 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215547 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215575 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " 
pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.215606 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-run\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.216110 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-run\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.216895 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-root\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.216972 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-blob-cache\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.217011 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-ca-bundles\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.217070 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildcachedir\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.217230 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-node-pullsecrets\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.217309 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildworkdir\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.217797 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-proxy-ca-bundles\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.217819 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-system-configs\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.224121 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-push\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.224227 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-pull\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.238850 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-w8bh7\" (UniqueName: \"kubernetes.io/projected/86daea63-6205-4d96-aff4-2fb2aa127ed3-kube-api-access-w8bh7\") pod \"sg-core-1-build\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.325781 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.546597 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 09:13:21 crc kubenswrapper[5117]: W0123 09:13:21.551431 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod86daea63_6205_4d96_aff4_2fb2aa127ed3.slice/crio-f74b377267864309aaa4124461d207614cd0b7eb2fe0b29f74fa5c7764cc485b WatchSource:0}: Error finding container f74b377267864309aaa4124461d207614cd0b7eb2fe0b29f74fa5c7764cc485b: Status 404 returned error can't find the container with id f74b377267864309aaa4124461d207614cd0b7eb2fe0b29f74fa5c7764cc485b Jan 23 09:13:21 crc kubenswrapper[5117]: I0123 09:13:21.700738 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"86daea63-6205-4d96-aff4-2fb2aa127ed3","Type":"ContainerStarted","Data":"f74b377267864309aaa4124461d207614cd0b7eb2fe0b29f74fa5c7764cc485b"} Jan 23 09:13:22 crc kubenswrapper[5117]: I0123 09:13:22.707944 5117 generic.go:358] "Generic (PLEG): container finished" podID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerID="137d2b91f26512fe2f7166f193c916ac36de52b8dc8619841ef96ce40bb9493c" exitCode=0 Jan 23 09:13:22 crc kubenswrapper[5117]: I0123 09:13:22.708039 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"86daea63-6205-4d96-aff4-2fb2aa127ed3","Type":"ContainerDied","Data":"137d2b91f26512fe2f7166f193c916ac36de52b8dc8619841ef96ce40bb9493c"} Jan 23 09:13:23 crc kubenswrapper[5117]: I0123 09:13:23.717500 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"86daea63-6205-4d96-aff4-2fb2aa127ed3","Type":"ContainerStarted","Data":"1774bcfa102cdb5fb7084b3759dbe13d2a46b1ef3581b0a5036589ad485c7897"} Jan 23 09:13:23 crc kubenswrapper[5117]: I0123 09:13:23.743689 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" 
pod="service-telemetry/sg-core-1-build" podStartSLOduration=3.743672888 podStartE2EDuration="3.743672888s" podCreationTimestamp="2026-01-23 09:13:20 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:13:23.743181574 +0000 UTC m=+1215.499306620" watchObservedRunningTime="2026-01-23 09:13:23.743672888 +0000 UTC m=+1215.499797915" Jan 23 09:13:27 crc kubenswrapper[5117]: E0123 09:13:27.853511 5117 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="1.083s" Jan 23 09:13:31 crc kubenswrapper[5117]: I0123 09:13:31.400378 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 09:13:31 crc kubenswrapper[5117]: I0123 09:13:31.401154 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-core-1-build" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerName="docker-build" containerID="cri-o://1774bcfa102cdb5fb7084b3759dbe13d2a46b1ef3581b0a5036589ad485c7897" gracePeriod=30 Jan 23 09:13:31 crc kubenswrapper[5117]: I0123 09:13:31.770720 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_86daea63-6205-4d96-aff4-2fb2aa127ed3/docker-build/0.log" Jan 23 09:13:31 crc kubenswrapper[5117]: I0123 09:13:31.771374 5117 generic.go:358] "Generic (PLEG): container finished" podID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerID="1774bcfa102cdb5fb7084b3759dbe13d2a46b1ef3581b0a5036589ad485c7897" exitCode=1 Jan 23 09:13:31 crc kubenswrapper[5117]: I0123 09:13:31.771453 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"86daea63-6205-4d96-aff4-2fb2aa127ed3","Type":"ContainerDied","Data":"1774bcfa102cdb5fb7084b3759dbe13d2a46b1ef3581b0a5036589ad485c7897"} Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.389428 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_86daea63-6205-4d96-aff4-2fb2aa127ed3/docker-build/0.log" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.389888 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.469159 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildcachedir\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.469210 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-push\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.469231 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-system-configs\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.469255 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-node-pullsecrets\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.469287 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-pull\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470261 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470400 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470554 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-proxy-ca-bundles\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470634 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-ca-bundles\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470664 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildworkdir\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470699 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-run\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470724 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-blob-cache\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470754 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-w8bh7\" (UniqueName: \"kubernetes.io/projected/86daea63-6205-4d96-aff4-2fb2aa127ed3-kube-api-access-w8bh7\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.470793 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-root\") pod \"86daea63-6205-4d96-aff4-2fb2aa127ed3\" (UID: \"86daea63-6205-4d96-aff4-2fb2aa127ed3\") " Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471169 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471297 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471529 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471558 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471570 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471583 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/86daea63-6205-4d96-aff4-2fb2aa127ed3-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471597 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.471854 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.475066 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.475303 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/86daea63-6205-4d96-aff4-2fb2aa127ed3-kube-api-access-w8bh7" (OuterVolumeSpecName: "kube-api-access-w8bh7") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "kube-api-access-w8bh7". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.475333 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.475300 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.513283 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.573222 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.573523 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/86daea63-6205-4d96-aff4-2fb2aa127ed3-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.573610 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.573688 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.573769 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.573849 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.574310 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-w8bh7\" (UniqueName: \"kubernetes.io/projected/86daea63-6205-4d96-aff4-2fb2aa127ed3-kube-api-access-w8bh7\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.581117 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "86daea63-6205-4d96-aff4-2fb2aa127ed3" (UID: "86daea63-6205-4d96-aff4-2fb2aa127ed3"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.676485 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/86daea63-6205-4d96-aff4-2fb2aa127ed3-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.779821 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-1-build_86daea63-6205-4d96-aff4-2fb2aa127ed3/docker-build/0.log" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.780240 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-1-build" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.780264 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-1-build" event={"ID":"86daea63-6205-4d96-aff4-2fb2aa127ed3","Type":"ContainerDied","Data":"f74b377267864309aaa4124461d207614cd0b7eb2fe0b29f74fa5c7764cc485b"} Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.780328 5117 scope.go:117] "RemoveContainer" containerID="1774bcfa102cdb5fb7084b3759dbe13d2a46b1ef3581b0a5036589ad485c7897" Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.820381 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.822085 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-core-1-build"] Jan 23 09:13:32 crc kubenswrapper[5117]: I0123 09:13:32.822817 5117 scope.go:117] "RemoveContainer" containerID="137d2b91f26512fe2f7166f193c916ac36de52b8dc8619841ef96ce40bb9493c" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.041155 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.041891 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerName="manage-dockerfile" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.041913 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerName="manage-dockerfile" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.041931 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerName="docker-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.041940 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerName="docker-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.042100 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" containerName="docker-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.306787 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.307018 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.312565 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-sys-config\"" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.312639 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-ca\"" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.312580 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.312917 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-core-2-global-ca\"" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387370 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-run\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387432 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-system-configs\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387482 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-push\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387509 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387552 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-root\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387580 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildcachedir\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387690 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zgs75\" (UniqueName: \"kubernetes.io/projected/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-kube-api-access-zgs75\") pod 
\"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387757 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.387918 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildworkdir\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.388076 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.388319 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-pull\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.388390 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489737 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-root\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489794 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildcachedir\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489817 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zgs75\" (UniqueName: \"kubernetes.io/projected/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-kube-api-access-zgs75\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489840 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-ca-bundles\") pod \"sg-core-2-build\" (UID: 
\"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489901 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildworkdir\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489921 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489958 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-pull\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.489980 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.490028 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-run\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.490048 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-system-configs\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.490079 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-push\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.490106 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.490319 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-root\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc 
kubenswrapper[5117]: I0123 09:13:33.490932 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-proxy-ca-bundles\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.490933 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-system-configs\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.491034 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-node-pullsecrets\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.491028 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-blob-cache\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.491193 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildworkdir\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.491536 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildcachedir\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.491906 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-run\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.492059 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-ca-bundles\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.496253 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-push\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.499858 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: 
\"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-pull\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.514851 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zgs75\" (UniqueName: \"kubernetes.io/projected/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-kube-api-access-zgs75\") pod \"sg-core-2-build\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.632820 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 09:13:33 crc kubenswrapper[5117]: I0123 09:13:33.825172 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-core-2-build"] Jan 23 09:13:34 crc kubenswrapper[5117]: I0123 09:13:34.782324 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="86daea63-6205-4d96-aff4-2fb2aa127ed3" path="/var/lib/kubelet/pods/86daea63-6205-4d96-aff4-2fb2aa127ed3/volumes" Jan 23 09:13:34 crc kubenswrapper[5117]: I0123 09:13:34.796235 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerStarted","Data":"2bb786d309a548045798cb328a6976ec9ec72edaed4edd466835a8bf20f1a9bc"} Jan 23 09:13:34 crc kubenswrapper[5117]: I0123 09:13:34.796289 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerStarted","Data":"81b044ec48769ecda9e14fe517f118cfdf9ab924c4e214190f636cdf3a472d5a"} Jan 23 09:13:35 crc kubenswrapper[5117]: I0123 09:13:35.805652 5117 generic.go:358] "Generic (PLEG): container finished" podID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerID="2bb786d309a548045798cb328a6976ec9ec72edaed4edd466835a8bf20f1a9bc" exitCode=0 Jan 23 09:13:35 crc kubenswrapper[5117]: I0123 09:13:35.805791 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerDied","Data":"2bb786d309a548045798cb328a6976ec9ec72edaed4edd466835a8bf20f1a9bc"} Jan 23 09:13:36 crc kubenswrapper[5117]: I0123 09:13:36.822716 5117 generic.go:358] "Generic (PLEG): container finished" podID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerID="9f552a33d284db9f56de3149d59ec52bc6ea5945bf6b5b5c7a89f51917e06f9e" exitCode=0 Jan 23 09:13:36 crc kubenswrapper[5117]: I0123 09:13:36.822793 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerDied","Data":"9f552a33d284db9f56de3149d59ec52bc6ea5945bf6b5b5c7a89f51917e06f9e"} Jan 23 09:13:36 crc kubenswrapper[5117]: I0123 09:13:36.856580 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-core-2-build_7553de07-e4b6-45e2-a0fc-3a48aabd4cbe/manage-dockerfile/0.log" Jan 23 09:13:38 crc kubenswrapper[5117]: I0123 09:13:38.844897 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerStarted","Data":"e2ad79f23ec79bd0ec92e29a9dae85b180d7b055742b2ad20b0eeafd39c63940"} Jan 23 09:13:38 crc kubenswrapper[5117]: I0123 09:13:38.868321 5117 pod_startup_latency_tracker.go:104] 
"Observed pod startup duration" pod="service-telemetry/sg-core-2-build" podStartSLOduration=5.868302291 podStartE2EDuration="5.868302291s" podCreationTimestamp="2026-01-23 09:13:33 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:13:38.866356926 +0000 UTC m=+1230.622481952" watchObservedRunningTime="2026-01-23 09:13:38.868302291 +0000 UTC m=+1230.624427317" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.133640 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485994-msdx6"] Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.504019 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485994-msdx6"] Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.504248 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.507557 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.507928 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.510453 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.582965 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-wsvhd\" (UniqueName: \"kubernetes.io/projected/3e459078-f382-4bad-bad1-1d440234381c-kube-api-access-wsvhd\") pod \"auto-csr-approver-29485994-msdx6\" (UID: \"3e459078-f382-4bad-bad1-1d440234381c\") " pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.683740 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-wsvhd\" (UniqueName: \"kubernetes.io/projected/3e459078-f382-4bad-bad1-1d440234381c-kube-api-access-wsvhd\") pod \"auto-csr-approver-29485994-msdx6\" (UID: \"3e459078-f382-4bad-bad1-1d440234381c\") " pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.705271 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-wsvhd\" (UniqueName: \"kubernetes.io/projected/3e459078-f382-4bad-bad1-1d440234381c-kube-api-access-wsvhd\") pod \"auto-csr-approver-29485994-msdx6\" (UID: \"3e459078-f382-4bad-bad1-1d440234381c\") " pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:00 crc kubenswrapper[5117]: I0123 09:14:00.822336 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:01 crc kubenswrapper[5117]: I0123 09:14:01.008440 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485994-msdx6"] Jan 23 09:14:02 crc kubenswrapper[5117]: I0123 09:14:02.003300 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485994-msdx6" event={"ID":"3e459078-f382-4bad-bad1-1d440234381c","Type":"ContainerStarted","Data":"1e966233815226115fb499665d75ecefdcc98f97044be43a71ae241782ceafdd"} Jan 23 09:14:05 crc kubenswrapper[5117]: I0123 09:14:05.023318 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485994-msdx6" event={"ID":"3e459078-f382-4bad-bad1-1d440234381c","Type":"ContainerStarted","Data":"d2c12d063354522630002ec95f20124ecc7b86028e85b2513a57a58c26361b0b"} Jan 23 09:14:05 crc kubenswrapper[5117]: I0123 09:14:05.036044 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29485994-msdx6" podStartSLOduration=1.6489606079999999 podStartE2EDuration="5.036025978s" podCreationTimestamp="2026-01-23 09:14:00 +0000 UTC" firstStartedPulling="2026-01-23 09:14:01.012931209 +0000 UTC m=+1252.769056235" lastFinishedPulling="2026-01-23 09:14:04.399996579 +0000 UTC m=+1256.156121605" observedRunningTime="2026-01-23 09:14:05.035594015 +0000 UTC m=+1256.791719041" watchObservedRunningTime="2026-01-23 09:14:05.036025978 +0000 UTC m=+1256.792151004" Jan 23 09:14:06 crc kubenswrapper[5117]: I0123 09:14:06.034760 5117 generic.go:358] "Generic (PLEG): container finished" podID="3e459078-f382-4bad-bad1-1d440234381c" containerID="d2c12d063354522630002ec95f20124ecc7b86028e85b2513a57a58c26361b0b" exitCode=0 Jan 23 09:14:06 crc kubenswrapper[5117]: I0123 09:14:06.034919 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485994-msdx6" event={"ID":"3e459078-f382-4bad-bad1-1d440234381c","Type":"ContainerDied","Data":"d2c12d063354522630002ec95f20124ecc7b86028e85b2513a57a58c26361b0b"} Jan 23 09:14:07 crc kubenswrapper[5117]: I0123 09:14:07.255462 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:07 crc kubenswrapper[5117]: I0123 09:14:07.378273 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-wsvhd\" (UniqueName: \"kubernetes.io/projected/3e459078-f382-4bad-bad1-1d440234381c-kube-api-access-wsvhd\") pod \"3e459078-f382-4bad-bad1-1d440234381c\" (UID: \"3e459078-f382-4bad-bad1-1d440234381c\") " Jan 23 09:14:07 crc kubenswrapper[5117]: I0123 09:14:07.387736 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3e459078-f382-4bad-bad1-1d440234381c-kube-api-access-wsvhd" (OuterVolumeSpecName: "kube-api-access-wsvhd") pod "3e459078-f382-4bad-bad1-1d440234381c" (UID: "3e459078-f382-4bad-bad1-1d440234381c"). InnerVolumeSpecName "kube-api-access-wsvhd". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:14:07 crc kubenswrapper[5117]: I0123 09:14:07.480281 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-wsvhd\" (UniqueName: \"kubernetes.io/projected/3e459078-f382-4bad-bad1-1d440234381c-kube-api-access-wsvhd\") on node \"crc\" DevicePath \"\"" Jan 23 09:14:08 crc kubenswrapper[5117]: I0123 09:14:08.053492 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485994-msdx6" Jan 23 09:14:08 crc kubenswrapper[5117]: I0123 09:14:08.053572 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485994-msdx6" event={"ID":"3e459078-f382-4bad-bad1-1d440234381c","Type":"ContainerDied","Data":"1e966233815226115fb499665d75ecefdcc98f97044be43a71ae241782ceafdd"} Jan 23 09:14:08 crc kubenswrapper[5117]: I0123 09:14:08.053624 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1e966233815226115fb499665d75ecefdcc98f97044be43a71ae241782ceafdd" Jan 23 09:14:08 crc kubenswrapper[5117]: I0123 09:14:08.109785 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485988-bvkjd"] Jan 23 09:14:08 crc kubenswrapper[5117]: I0123 09:14:08.115767 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485988-bvkjd"] Jan 23 09:14:08 crc kubenswrapper[5117]: I0123 09:14:08.784308 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="6cd27b8c-4532-4d81-8e54-04c774c10f77" path="/var/lib/kubelet/pods/6cd27b8c-4532-4d81-8e54-04c774c10f77/volumes" Jan 23 09:14:15 crc kubenswrapper[5117]: I0123 09:14:15.436571 5117 scope.go:117] "RemoveContainer" containerID="089159a1d65087f5620429ba04d4f9e6b07cd65fbb29858b13b431545e678346" Jan 23 09:15:00 crc kubenswrapper[5117]: I0123 09:15:00.136893 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh"] Jan 23 09:15:00 crc kubenswrapper[5117]: I0123 09:15:00.141698 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3e459078-f382-4bad-bad1-1d440234381c" containerName="oc" Jan 23 09:15:00 crc kubenswrapper[5117]: I0123 09:15:00.141736 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3e459078-f382-4bad-bad1-1d440234381c" containerName="oc" Jan 23 09:15:00 crc kubenswrapper[5117]: I0123 09:15:00.142329 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3e459078-f382-4bad-bad1-1d440234381c" containerName="oc" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.441007 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.446452 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.447272 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.451275 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh"] Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.570726 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9681653e-40c6-4955-bb17-2f3965178544-secret-volume\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.570861 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9681653e-40c6-4955-bb17-2f3965178544-config-volume\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.571051 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kg76k\" (UniqueName: \"kubernetes.io/projected/9681653e-40c6-4955-bb17-2f3965178544-kube-api-access-kg76k\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.671872 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9681653e-40c6-4955-bb17-2f3965178544-config-volume\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.671972 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kg76k\" (UniqueName: \"kubernetes.io/projected/9681653e-40c6-4955-bb17-2f3965178544-kube-api-access-kg76k\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.672056 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9681653e-40c6-4955-bb17-2f3965178544-secret-volume\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.672783 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: 
\"kubernetes.io/configmap/9681653e-40c6-4955-bb17-2f3965178544-config-volume\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.686390 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9681653e-40c6-4955-bb17-2f3965178544-secret-volume\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.701867 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kg76k\" (UniqueName: \"kubernetes.io/projected/9681653e-40c6-4955-bb17-2f3965178544-kube-api-access-kg76k\") pod \"collect-profiles-29485995-5f6jh\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:01 crc kubenswrapper[5117]: I0123 09:15:01.760337 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:02 crc kubenswrapper[5117]: I0123 09:15:02.192073 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh"] Jan 23 09:15:02 crc kubenswrapper[5117]: W0123 09:15:02.202124 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9681653e_40c6_4955_bb17_2f3965178544.slice/crio-1309e98eec0f717c484d6daaca31f48f647c0836e90eab3c3cad9c7d6e90e6d9 WatchSource:0}: Error finding container 1309e98eec0f717c484d6daaca31f48f647c0836e90eab3c3cad9c7d6e90e6d9: Status 404 returned error can't find the container with id 1309e98eec0f717c484d6daaca31f48f647c0836e90eab3c3cad9c7d6e90e6d9 Jan 23 09:15:02 crc kubenswrapper[5117]: I0123 09:15:02.204461 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:15:02 crc kubenswrapper[5117]: I0123 09:15:02.474611 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" event={"ID":"9681653e-40c6-4955-bb17-2f3965178544","Type":"ContainerStarted","Data":"6a60c99b61772adbdc0f24c7c286609123ded37a459500262cc8e2bc742ba57b"} Jan 23 09:15:02 crc kubenswrapper[5117]: I0123 09:15:02.474883 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" event={"ID":"9681653e-40c6-4955-bb17-2f3965178544","Type":"ContainerStarted","Data":"1309e98eec0f717c484d6daaca31f48f647c0836e90eab3c3cad9c7d6e90e6d9"} Jan 23 09:15:02 crc kubenswrapper[5117]: I0123 09:15:02.492043 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" podStartSLOduration=2.492021226 podStartE2EDuration="2.492021226s" podCreationTimestamp="2026-01-23 09:15:00 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:15:02.488239166 +0000 UTC m=+1314.244364202" watchObservedRunningTime="2026-01-23 09:15:02.492021226 +0000 UTC m=+1314.248146252" Jan 23 09:15:03 crc kubenswrapper[5117]: I0123 09:15:03.482802 5117 
generic.go:358] "Generic (PLEG): container finished" podID="9681653e-40c6-4955-bb17-2f3965178544" containerID="6a60c99b61772adbdc0f24c7c286609123ded37a459500262cc8e2bc742ba57b" exitCode=0 Jan 23 09:15:03 crc kubenswrapper[5117]: I0123 09:15:03.482937 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" event={"ID":"9681653e-40c6-4955-bb17-2f3965178544","Type":"ContainerDied","Data":"6a60c99b61772adbdc0f24c7c286609123ded37a459500262cc8e2bc742ba57b"} Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.759922 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.812517 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9681653e-40c6-4955-bb17-2f3965178544-secret-volume\") pod \"9681653e-40c6-4955-bb17-2f3965178544\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.812622 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9681653e-40c6-4955-bb17-2f3965178544-config-volume\") pod \"9681653e-40c6-4955-bb17-2f3965178544\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.812653 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kg76k\" (UniqueName: \"kubernetes.io/projected/9681653e-40c6-4955-bb17-2f3965178544-kube-api-access-kg76k\") pod \"9681653e-40c6-4955-bb17-2f3965178544\" (UID: \"9681653e-40c6-4955-bb17-2f3965178544\") " Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.813325 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9681653e-40c6-4955-bb17-2f3965178544-config-volume" (OuterVolumeSpecName: "config-volume") pod "9681653e-40c6-4955-bb17-2f3965178544" (UID: "9681653e-40c6-4955-bb17-2f3965178544"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.826324 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9681653e-40c6-4955-bb17-2f3965178544-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "9681653e-40c6-4955-bb17-2f3965178544" (UID: "9681653e-40c6-4955-bb17-2f3965178544"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.826993 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9681653e-40c6-4955-bb17-2f3965178544-kube-api-access-kg76k" (OuterVolumeSpecName: "kube-api-access-kg76k") pod "9681653e-40c6-4955-bb17-2f3965178544" (UID: "9681653e-40c6-4955-bb17-2f3965178544"). InnerVolumeSpecName "kube-api-access-kg76k". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.914588 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/9681653e-40c6-4955-bb17-2f3965178544-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.914900 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/9681653e-40c6-4955-bb17-2f3965178544-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:04 crc kubenswrapper[5117]: I0123 09:15:04.914995 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kg76k\" (UniqueName: \"kubernetes.io/projected/9681653e-40c6-4955-bb17-2f3965178544-kube-api-access-kg76k\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:05 crc kubenswrapper[5117]: I0123 09:15:05.500568 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" event={"ID":"9681653e-40c6-4955-bb17-2f3965178544","Type":"ContainerDied","Data":"1309e98eec0f717c484d6daaca31f48f647c0836e90eab3c3cad9c7d6e90e6d9"} Jan 23 09:15:05 crc kubenswrapper[5117]: I0123 09:15:05.500651 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29485995-5f6jh" Jan 23 09:15:05 crc kubenswrapper[5117]: I0123 09:15:05.500710 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1309e98eec0f717c484d6daaca31f48f647c0836e90eab3c3cad9c7d6e90e6d9" Jan 23 09:15:31 crc kubenswrapper[5117]: I0123 09:15:31.742065 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-78wk7"] Jan 23 09:15:31 crc kubenswrapper[5117]: I0123 09:15:31.744644 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9681653e-40c6-4955-bb17-2f3965178544" containerName="collect-profiles" Jan 23 09:15:31 crc kubenswrapper[5117]: I0123 09:15:31.744774 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9681653e-40c6-4955-bb17-2f3965178544" containerName="collect-profiles" Jan 23 09:15:31 crc kubenswrapper[5117]: I0123 09:15:31.744995 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="9681653e-40c6-4955-bb17-2f3965178544" containerName="collect-profiles" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.395049 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.404560 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78wk7"] Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.521592 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lwfc9\" (UniqueName: \"kubernetes.io/projected/549eccdb-30ba-4953-b688-d916e046b70c-kube-api-access-lwfc9\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.522006 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-catalog-content\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.522086 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-utilities\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.623315 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lwfc9\" (UniqueName: \"kubernetes.io/projected/549eccdb-30ba-4953-b688-d916e046b70c-kube-api-access-lwfc9\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.623392 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-catalog-content\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.623414 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-utilities\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.623904 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-catalog-content\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.623962 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-utilities\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.644523 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-lwfc9\" (UniqueName: \"kubernetes.io/projected/549eccdb-30ba-4953-b688-d916e046b70c-kube-api-access-lwfc9\") pod \"redhat-operators-78wk7\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.715478 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:33 crc kubenswrapper[5117]: I0123 09:15:33.973146 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-78wk7"] Jan 23 09:15:34 crc kubenswrapper[5117]: I0123 09:15:34.672984 5117 generic.go:358] "Generic (PLEG): container finished" podID="549eccdb-30ba-4953-b688-d916e046b70c" containerID="587626899c1cd788335428866f643990b47165c54336e2d8d1b1260a337e7aab" exitCode=0 Jan 23 09:15:34 crc kubenswrapper[5117]: I0123 09:15:34.673117 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerDied","Data":"587626899c1cd788335428866f643990b47165c54336e2d8d1b1260a337e7aab"} Jan 23 09:15:34 crc kubenswrapper[5117]: I0123 09:15:34.673165 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerStarted","Data":"efbe6b01e99a7cfdacf9508d9b68545b619d639fc4c77b6f8e9f79f288c64feb"} Jan 23 09:15:36 crc kubenswrapper[5117]: I0123 09:15:36.689583 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerStarted","Data":"edcc1a9bd0902033677bf52c44ac2bd7951fd952738da8302d32500e298f4096"} Jan 23 09:15:37 crc kubenswrapper[5117]: I0123 09:15:37.696992 5117 generic.go:358] "Generic (PLEG): container finished" podID="549eccdb-30ba-4953-b688-d916e046b70c" containerID="edcc1a9bd0902033677bf52c44ac2bd7951fd952738da8302d32500e298f4096" exitCode=0 Jan 23 09:15:37 crc kubenswrapper[5117]: I0123 09:15:37.697106 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerDied","Data":"edcc1a9bd0902033677bf52c44ac2bd7951fd952738da8302d32500e298f4096"} Jan 23 09:15:38 crc kubenswrapper[5117]: I0123 09:15:38.706101 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerStarted","Data":"9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9"} Jan 23 09:15:38 crc kubenswrapper[5117]: I0123 09:15:38.730976 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-78wk7" podStartSLOduration=6.885603098 podStartE2EDuration="7.73095183s" podCreationTimestamp="2026-01-23 09:15:31 +0000 UTC" firstStartedPulling="2026-01-23 09:15:34.673979531 +0000 UTC m=+1346.430104557" lastFinishedPulling="2026-01-23 09:15:35.519328263 +0000 UTC m=+1347.275453289" observedRunningTime="2026-01-23 09:15:38.724181815 +0000 UTC m=+1350.480306861" watchObservedRunningTime="2026-01-23 09:15:38.73095183 +0000 UTC m=+1350.487076856" Jan 23 09:15:43 crc kubenswrapper[5117]: I0123 09:15:43.715601 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-78wk7" 
Jan 23 09:15:43 crc kubenswrapper[5117]: I0123 09:15:43.717396 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:43 crc kubenswrapper[5117]: I0123 09:15:43.756916 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:43 crc kubenswrapper[5117]: I0123 09:15:43.803312 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:44 crc kubenswrapper[5117]: I0123 09:15:44.734732 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78wk7"] Jan 23 09:15:45 crc kubenswrapper[5117]: I0123 09:15:45.063191 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:15:45 crc kubenswrapper[5117]: I0123 09:15:45.063592 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:15:45 crc kubenswrapper[5117]: I0123 09:15:45.749623 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-78wk7" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="registry-server" containerID="cri-o://9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9" gracePeriod=2 Jan 23 09:15:53 crc kubenswrapper[5117]: E0123 09:15:53.758263 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9 is running failed: container process not found" containerID="9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 09:15:53 crc kubenswrapper[5117]: E0123 09:15:53.764169 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9 is running failed: container process not found" containerID="9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 09:15:53 crc kubenswrapper[5117]: E0123 09:15:53.764538 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9 is running failed: container process not found" containerID="9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 09:15:53 crc kubenswrapper[5117]: E0123 09:15:53.764584 5117 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9 is running failed: container process not found" probeType="Readiness" 
pod="openshift-marketplace/redhat-operators-78wk7" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="registry-server" probeResult="unknown" Jan 23 09:15:53 crc kubenswrapper[5117]: I0123 09:15:53.766951 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-78wk7_549eccdb-30ba-4953-b688-d916e046b70c/registry-server/0.log" Jan 23 09:15:53 crc kubenswrapper[5117]: I0123 09:15:53.768552 5117 generic.go:358] "Generic (PLEG): container finished" podID="549eccdb-30ba-4953-b688-d916e046b70c" containerID="9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9" exitCode=-1 Jan 23 09:15:53 crc kubenswrapper[5117]: I0123 09:15:53.769048 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerDied","Data":"9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9"} Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.092144 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.151735 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-utilities\") pod \"549eccdb-30ba-4953-b688-d916e046b70c\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.151897 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lwfc9\" (UniqueName: \"kubernetes.io/projected/549eccdb-30ba-4953-b688-d916e046b70c-kube-api-access-lwfc9\") pod \"549eccdb-30ba-4953-b688-d916e046b70c\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.151969 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-catalog-content\") pod \"549eccdb-30ba-4953-b688-d916e046b70c\" (UID: \"549eccdb-30ba-4953-b688-d916e046b70c\") " Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.153335 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-utilities" (OuterVolumeSpecName: "utilities") pod "549eccdb-30ba-4953-b688-d916e046b70c" (UID: "549eccdb-30ba-4953-b688-d916e046b70c"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.163677 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.163974 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/549eccdb-30ba-4953-b688-d916e046b70c-kube-api-access-lwfc9" (OuterVolumeSpecName: "kube-api-access-lwfc9") pod "549eccdb-30ba-4953-b688-d916e046b70c" (UID: "549eccdb-30ba-4953-b688-d916e046b70c"). InnerVolumeSpecName "kube-api-access-lwfc9". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.265066 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lwfc9\" (UniqueName: \"kubernetes.io/projected/549eccdb-30ba-4953-b688-d916e046b70c-kube-api-access-lwfc9\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.267865 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "549eccdb-30ba-4953-b688-d916e046b70c" (UID: "549eccdb-30ba-4953-b688-d916e046b70c"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.366608 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/549eccdb-30ba-4953-b688-d916e046b70c-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.779571 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-78wk7" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.779843 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-78wk7" event={"ID":"549eccdb-30ba-4953-b688-d916e046b70c","Type":"ContainerDied","Data":"efbe6b01e99a7cfdacf9508d9b68545b619d639fc4c77b6f8e9f79f288c64feb"} Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.780195 5117 scope.go:117] "RemoveContainer" containerID="9efec5052b0e6b47ebdf56437c7bbd73eb70c81787b47b3adf5e4a17aac3f2f9" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.800393 5117 scope.go:117] "RemoveContainer" containerID="edcc1a9bd0902033677bf52c44ac2bd7951fd952738da8302d32500e298f4096" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.822601 5117 scope.go:117] "RemoveContainer" containerID="587626899c1cd788335428866f643990b47165c54336e2d8d1b1260a337e7aab" Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.833894 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-78wk7"] Jan 23 09:15:54 crc kubenswrapper[5117]: I0123 09:15:54.840642 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-78wk7"] Jan 23 09:15:56 crc kubenswrapper[5117]: I0123 09:15:56.780617 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="549eccdb-30ba-4953-b688-d916e046b70c" path="/var/lib/kubelet/pods/549eccdb-30ba-4953-b688-d916e046b70c/volumes" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.258821 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485996-f6sm8"] Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.260637 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="registry-server" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.260750 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="registry-server" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.260840 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="extract-content" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 
09:16:00.260912 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="extract-content" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.261020 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="extract-utilities" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.261094 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="extract-utilities" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.261331 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="549eccdb-30ba-4953-b688-d916e046b70c" containerName="registry-server" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.317852 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.320597 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.320809 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.320917 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.450681 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-cqkpv\" (UniqueName: \"kubernetes.io/projected/2712b004-702a-4d2c-b59f-6c7aee429a80-kube-api-access-cqkpv\") pod \"auto-csr-approver-29485996-f6sm8\" (UID: \"2712b004-702a-4d2c-b59f-6c7aee429a80\") " pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.552280 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-cqkpv\" (UniqueName: \"kubernetes.io/projected/2712b004-702a-4d2c-b59f-6c7aee429a80-kube-api-access-cqkpv\") pod \"auto-csr-approver-29485996-f6sm8\" (UID: \"2712b004-702a-4d2c-b59f-6c7aee429a80\") " pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.628898 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485996-f6sm8"] Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.649507 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-cqkpv\" (UniqueName: \"kubernetes.io/projected/2712b004-702a-4d2c-b59f-6c7aee429a80-kube-api-access-cqkpv\") pod \"auto-csr-approver-29485996-f6sm8\" (UID: \"2712b004-702a-4d2c-b59f-6c7aee429a80\") " pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:00 crc kubenswrapper[5117]: I0123 09:16:00.940652 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:01 crc kubenswrapper[5117]: I0123 09:16:01.225840 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485996-f6sm8"] Jan 23 09:16:01 crc kubenswrapper[5117]: I0123 09:16:01.826853 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" event={"ID":"2712b004-702a-4d2c-b59f-6c7aee429a80","Type":"ContainerStarted","Data":"563effe18a3b9e3aa825888b890318498ec5efd5bd5fd0b863649adaeb0dd550"} Jan 23 09:16:04 crc kubenswrapper[5117]: I0123 09:16:04.852556 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" event={"ID":"2712b004-702a-4d2c-b59f-6c7aee429a80","Type":"ContainerStarted","Data":"a2372b059e44c2ce7e5d009c0c37aa834d2f759ce27c17032429911837fdae5b"} Jan 23 09:16:04 crc kubenswrapper[5117]: I0123 09:16:04.867682 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" podStartSLOduration=1.950523921 podStartE2EDuration="4.867661205s" podCreationTimestamp="2026-01-23 09:16:00 +0000 UTC" firstStartedPulling="2026-01-23 09:16:01.250814513 +0000 UTC m=+1373.006939539" lastFinishedPulling="2026-01-23 09:16:04.167951797 +0000 UTC m=+1375.924076823" observedRunningTime="2026-01-23 09:16:04.865445723 +0000 UTC m=+1376.621570749" watchObservedRunningTime="2026-01-23 09:16:04.867661205 +0000 UTC m=+1376.623786231" Jan 23 09:16:05 crc kubenswrapper[5117]: I0123 09:16:05.862557 5117 generic.go:358] "Generic (PLEG): container finished" podID="2712b004-702a-4d2c-b59f-6c7aee429a80" containerID="a2372b059e44c2ce7e5d009c0c37aa834d2f759ce27c17032429911837fdae5b" exitCode=0 Jan 23 09:16:05 crc kubenswrapper[5117]: I0123 09:16:05.862752 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" event={"ID":"2712b004-702a-4d2c-b59f-6c7aee429a80","Type":"ContainerDied","Data":"a2372b059e44c2ce7e5d009c0c37aa834d2f759ce27c17032429911837fdae5b"} Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.563225 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.573094 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-cqkpv\" (UniqueName: \"kubernetes.io/projected/2712b004-702a-4d2c-b59f-6c7aee429a80-kube-api-access-cqkpv\") pod \"2712b004-702a-4d2c-b59f-6c7aee429a80\" (UID: \"2712b004-702a-4d2c-b59f-6c7aee429a80\") " Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.578871 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/2712b004-702a-4d2c-b59f-6c7aee429a80-kube-api-access-cqkpv" (OuterVolumeSpecName: "kube-api-access-cqkpv") pod "2712b004-702a-4d2c-b59f-6c7aee429a80" (UID: "2712b004-702a-4d2c-b59f-6c7aee429a80"). InnerVolumeSpecName "kube-api-access-cqkpv". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.674640 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-cqkpv\" (UniqueName: \"kubernetes.io/projected/2712b004-702a-4d2c-b59f-6c7aee429a80-kube-api-access-cqkpv\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.886477 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" event={"ID":"2712b004-702a-4d2c-b59f-6c7aee429a80","Type":"ContainerDied","Data":"563effe18a3b9e3aa825888b890318498ec5efd5bd5fd0b863649adaeb0dd550"} Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.886525 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="563effe18a3b9e3aa825888b890318498ec5efd5bd5fd0b863649adaeb0dd550" Jan 23 09:16:07 crc kubenswrapper[5117]: I0123 09:16:07.886601 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485996-f6sm8" Jan 23 09:16:08 crc kubenswrapper[5117]: I0123 09:16:08.263271 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485990-zjdbs"] Jan 23 09:16:08 crc kubenswrapper[5117]: I0123 09:16:08.269270 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485990-zjdbs"] Jan 23 09:16:08 crc kubenswrapper[5117]: I0123 09:16:08.779012 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7868dc80-3681-42b1-b23a-e117f348ff9a" path="/var/lib/kubelet/pods/7868dc80-3681-42b1-b23a-e117f348ff9a/volumes" Jan 23 09:16:09 crc kubenswrapper[5117]: I0123 09:16:09.177423 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8rqdq"] Jan 23 09:16:09 crc kubenswrapper[5117]: I0123 09:16:09.178080 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="2712b004-702a-4d2c-b59f-6c7aee429a80" containerName="oc" Jan 23 09:16:09 crc kubenswrapper[5117]: I0123 09:16:09.178101 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="2712b004-702a-4d2c-b59f-6c7aee429a80" containerName="oc" Jan 23 09:16:09 crc kubenswrapper[5117]: I0123 09:16:09.180255 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="2712b004-702a-4d2c-b59f-6c7aee429a80" containerName="oc" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.042853 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rqdq"] Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.043496 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.110025 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-5p422\" (UniqueName: \"kubernetes.io/projected/0e981248-5bb2-4cb2-831b-bd6043e2c63d-kube-api-access-5p422\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.110085 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-utilities\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.110320 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-catalog-content\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.211941 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-catalog-content\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.212269 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-5p422\" (UniqueName: \"kubernetes.io/projected/0e981248-5bb2-4cb2-831b-bd6043e2c63d-kube-api-access-5p422\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.212365 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-utilities\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.212513 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-catalog-content\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.212840 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-utilities\") pod \"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.232650 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-5p422\" (UniqueName: \"kubernetes.io/projected/0e981248-5bb2-4cb2-831b-bd6043e2c63d-kube-api-access-5p422\") pod 
\"certified-operators-8rqdq\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.368478 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.852032 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8rqdq"] Jan 23 09:16:10 crc kubenswrapper[5117]: I0123 09:16:10.906568 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rqdq" event={"ID":"0e981248-5bb2-4cb2-831b-bd6043e2c63d","Type":"ContainerStarted","Data":"20af9a2b008dea781431631b832d7e24c73e3d83aebcfdc90932cf88351c9465"} Jan 23 09:16:11 crc kubenswrapper[5117]: I0123 09:16:11.917082 5117 generic.go:358] "Generic (PLEG): container finished" podID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerID="09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479" exitCode=0 Jan 23 09:16:11 crc kubenswrapper[5117]: I0123 09:16:11.917159 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rqdq" event={"ID":"0e981248-5bb2-4cb2-831b-bd6043e2c63d","Type":"ContainerDied","Data":"09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479"} Jan 23 09:16:13 crc kubenswrapper[5117]: I0123 09:16:13.932400 5117 generic.go:358] "Generic (PLEG): container finished" podID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerID="c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93" exitCode=0 Jan 23 09:16:13 crc kubenswrapper[5117]: I0123 09:16:13.932482 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rqdq" event={"ID":"0e981248-5bb2-4cb2-831b-bd6043e2c63d","Type":"ContainerDied","Data":"c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93"} Jan 23 09:16:14 crc kubenswrapper[5117]: I0123 09:16:14.946206 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rqdq" event={"ID":"0e981248-5bb2-4cb2-831b-bd6043e2c63d","Type":"ContainerStarted","Data":"d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b"} Jan 23 09:16:14 crc kubenswrapper[5117]: I0123 09:16:14.965981 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8rqdq" podStartSLOduration=4.543903061 podStartE2EDuration="5.965949261s" podCreationTimestamp="2026-01-23 09:16:09 +0000 UTC" firstStartedPulling="2026-01-23 09:16:11.918874777 +0000 UTC m=+1383.674999843" lastFinishedPulling="2026-01-23 09:16:13.340921017 +0000 UTC m=+1385.097046043" observedRunningTime="2026-01-23 09:16:14.963996536 +0000 UTC m=+1386.720121562" watchObservedRunningTime="2026-01-23 09:16:14.965949261 +0000 UTC m=+1386.722074287" Jan 23 09:16:15 crc kubenswrapper[5117]: I0123 09:16:15.063642 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:16:15 crc kubenswrapper[5117]: I0123 09:16:15.063743 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" 
containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:16:20 crc kubenswrapper[5117]: I0123 09:16:20.369610 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:20 crc kubenswrapper[5117]: I0123 09:16:20.369991 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:20 crc kubenswrapper[5117]: I0123 09:16:20.412842 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:21 crc kubenswrapper[5117]: I0123 09:16:21.026540 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:21 crc kubenswrapper[5117]: I0123 09:16:21.356679 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rqdq"] Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.002932 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8rqdq" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="registry-server" containerID="cri-o://d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b" gracePeriod=2 Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.850742 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.911510 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-5p422\" (UniqueName: \"kubernetes.io/projected/0e981248-5bb2-4cb2-831b-bd6043e2c63d-kube-api-access-5p422\") pod \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.911885 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-catalog-content\") pod \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.912025 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-utilities\") pod \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\" (UID: \"0e981248-5bb2-4cb2-831b-bd6043e2c63d\") " Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.913218 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-utilities" (OuterVolumeSpecName: "utilities") pod "0e981248-5bb2-4cb2-831b-bd6043e2c63d" (UID: "0e981248-5bb2-4cb2-831b-bd6043e2c63d"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.918658 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/0e981248-5bb2-4cb2-831b-bd6043e2c63d-kube-api-access-5p422" (OuterVolumeSpecName: "kube-api-access-5p422") pod "0e981248-5bb2-4cb2-831b-bd6043e2c63d" (UID: "0e981248-5bb2-4cb2-831b-bd6043e2c63d"). 
InnerVolumeSpecName "kube-api-access-5p422". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:16:23 crc kubenswrapper[5117]: I0123 09:16:23.945544 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "0e981248-5bb2-4cb2-831b-bd6043e2c63d" (UID: "0e981248-5bb2-4cb2-831b-bd6043e2c63d"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.009719 5117 generic.go:358] "Generic (PLEG): container finished" podID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerID="d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b" exitCode=0 Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.009766 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rqdq" event={"ID":"0e981248-5bb2-4cb2-831b-bd6043e2c63d","Type":"ContainerDied","Data":"d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b"} Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.009827 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8rqdq" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.009886 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8rqdq" event={"ID":"0e981248-5bb2-4cb2-831b-bd6043e2c63d","Type":"ContainerDied","Data":"20af9a2b008dea781431631b832d7e24c73e3d83aebcfdc90932cf88351c9465"} Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.009916 5117 scope.go:117] "RemoveContainer" containerID="d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.013947 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-5p422\" (UniqueName: \"kubernetes.io/projected/0e981248-5bb2-4cb2-831b-bd6043e2c63d-kube-api-access-5p422\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.013979 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.013988 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/0e981248-5bb2-4cb2-831b-bd6043e2c63d-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.028627 5117 scope.go:117] "RemoveContainer" containerID="c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.041592 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8rqdq"] Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.048939 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8rqdq"] Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.063706 5117 scope.go:117] "RemoveContainer" containerID="09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.078455 5117 scope.go:117] "RemoveContainer" containerID="d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b" Jan 23 09:16:24 crc 
kubenswrapper[5117]: E0123 09:16:24.078826 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b\": container with ID starting with d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b not found: ID does not exist" containerID="d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.078881 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b"} err="failed to get container status \"d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b\": rpc error: code = NotFound desc = could not find container \"d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b\": container with ID starting with d5cb2c7ebff9c03c96005eed00063f1c51f8844805050f0e7646a973b58f767b not found: ID does not exist" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.078911 5117 scope.go:117] "RemoveContainer" containerID="c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93" Jan 23 09:16:24 crc kubenswrapper[5117]: E0123 09:16:24.079205 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93\": container with ID starting with c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93 not found: ID does not exist" containerID="c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.079230 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93"} err="failed to get container status \"c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93\": rpc error: code = NotFound desc = could not find container \"c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93\": container with ID starting with c845d4b58d8d2bec537c65649a8b99de1ff415aadb5435c8a1cf0eec1340ab93 not found: ID does not exist" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.079245 5117 scope.go:117] "RemoveContainer" containerID="09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479" Jan 23 09:16:24 crc kubenswrapper[5117]: E0123 09:16:24.079459 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479\": container with ID starting with 09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479 not found: ID does not exist" containerID="09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479" Jan 23 09:16:24 crc kubenswrapper[5117]: I0123 09:16:24.079484 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479"} err="failed to get container status \"09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479\": rpc error: code = NotFound desc = could not find container \"09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479\": container with ID starting with 09532cb46e84b7c780701ccbbaba5c6e0658e6bea897bf5a9936700589c86479 not found: ID does not exist" Jan 23 09:16:24 crc kubenswrapper[5117]: 
I0123 09:16:24.778932 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" path="/var/lib/kubelet/pods/0e981248-5bb2-4cb2-831b-bd6043e2c63d/volumes" Jan 23 09:16:45 crc kubenswrapper[5117]: I0123 09:16:45.063383 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:16:45 crc kubenswrapper[5117]: I0123 09:16:45.063920 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:16:45 crc kubenswrapper[5117]: I0123 09:16:45.063982 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:16:45 crc kubenswrapper[5117]: I0123 09:16:45.064957 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"26aa588973e2b79613df878d11883a627be7e28f6aad990afd10a1ff422b5018"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:16:45 crc kubenswrapper[5117]: I0123 09:16:45.065107 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://26aa588973e2b79613df878d11883a627be7e28f6aad990afd10a1ff422b5018" gracePeriod=600 Jan 23 09:16:55 crc kubenswrapper[5117]: I0123 09:16:55.180779 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-config-operator_machine-config-daemon-qfh6g_2d41b436-a78c-412b-b56c-54b8d73381e6/machine-config-daemon/6.log" Jan 23 09:16:55 crc kubenswrapper[5117]: I0123 09:16:55.183593 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="26aa588973e2b79613df878d11883a627be7e28f6aad990afd10a1ff422b5018" exitCode=-1 Jan 23 09:16:55 crc kubenswrapper[5117]: I0123 09:16:55.183685 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"26aa588973e2b79613df878d11883a627be7e28f6aad990afd10a1ff422b5018"} Jan 23 09:16:55 crc kubenswrapper[5117]: I0123 09:16:55.183994 5117 scope.go:117] "RemoveContainer" containerID="867cd9670ed8afceb2b30c6ee2fd23303a86c3de53c70730de425c5a0b5ccede" Jan 23 09:17:12 crc kubenswrapper[5117]: I0123 09:17:12.299784 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9"} Jan 23 09:17:15 crc kubenswrapper[5117]: I0123 09:17:15.561490 5117 scope.go:117] "RemoveContainer" containerID="c8d18002d5e2829e6368b878e9961b9d5e42fe3af27ba1379fd080334ae46163" Jan 23 09:17:27 crc 
kubenswrapper[5117]: I0123 09:17:27.969143 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-mz466"] Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970686 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="extract-content" Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970706 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="extract-content" Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970716 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="registry-server" Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970723 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="registry-server" Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970753 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="extract-utilities" Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970762 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="extract-utilities" Jan 23 09:17:27 crc kubenswrapper[5117]: I0123 09:17:27.970990 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="0e981248-5bb2-4cb2-831b-bd6043e2c63d" containerName="registry-server" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.178803 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.186586 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mz466"] Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.241048 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7j9s9\" (UniqueName: \"kubernetes.io/projected/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-kube-api-access-7j9s9\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.241158 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-catalog-content\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.241190 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-utilities\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.342836 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7j9s9\" (UniqueName: \"kubernetes.io/projected/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-kube-api-access-7j9s9\") pod \"community-operators-mz466\" (UID: 
\"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.342929 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-catalog-content\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.343499 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-utilities\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.344240 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-utilities\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.344490 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-catalog-content\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.363755 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7j9s9\" (UniqueName: \"kubernetes.io/projected/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-kube-api-access-7j9s9\") pod \"community-operators-mz466\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:33 crc kubenswrapper[5117]: I0123 09:17:33.498578 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:34 crc kubenswrapper[5117]: I0123 09:17:34.764209 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-mz466"] Jan 23 09:17:35 crc kubenswrapper[5117]: I0123 09:17:35.461300 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerStarted","Data":"d188a0a6431cfdcfac4b2dfc796a8a722c8c2b07591239a16d971dbec7aeb61d"} Jan 23 09:17:36 crc kubenswrapper[5117]: I0123 09:17:36.469404 5117 generic.go:358] "Generic (PLEG): container finished" podID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerID="13b0f026e2eab193e21429085d605de710d551d4ec36774481b14139f763ede8" exitCode=0 Jan 23 09:17:36 crc kubenswrapper[5117]: I0123 09:17:36.469513 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerDied","Data":"13b0f026e2eab193e21429085d605de710d551d4ec36774481b14139f763ede8"} Jan 23 09:17:39 crc kubenswrapper[5117]: I0123 09:17:39.491842 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerStarted","Data":"1efe1045e977d1306d689be32944d88a918d47bccdebfce0844218ea4abc948b"} Jan 23 09:17:40 crc kubenswrapper[5117]: I0123 09:17:40.505445 5117 generic.go:358] "Generic (PLEG): container finished" podID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerID="1efe1045e977d1306d689be32944d88a918d47bccdebfce0844218ea4abc948b" exitCode=0 Jan 23 09:17:40 crc kubenswrapper[5117]: I0123 09:17:40.505674 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerDied","Data":"1efe1045e977d1306d689be32944d88a918d47bccdebfce0844218ea4abc948b"} Jan 23 09:17:41 crc kubenswrapper[5117]: I0123 09:17:41.513167 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerStarted","Data":"95d10d77154bb1847b54f2053a80c7387e12fe531180bc71c9b6c1fdaba95c2a"} Jan 23 09:17:41 crc kubenswrapper[5117]: I0123 09:17:41.534084 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-mz466" podStartSLOduration=11.9408215 podStartE2EDuration="14.534064343s" podCreationTimestamp="2026-01-23 09:17:27 +0000 UTC" firstStartedPulling="2026-01-23 09:17:36.470307485 +0000 UTC m=+1468.226432511" lastFinishedPulling="2026-01-23 09:17:39.063550318 +0000 UTC m=+1470.819675354" observedRunningTime="2026-01-23 09:17:41.530339749 +0000 UTC m=+1473.286464775" watchObservedRunningTime="2026-01-23 09:17:41.534064343 +0000 UTC m=+1473.290189369" Jan 23 09:17:43 crc kubenswrapper[5117]: I0123 09:17:43.499560 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:43 crc kubenswrapper[5117]: I0123 09:17:43.499927 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:43 crc kubenswrapper[5117]: I0123 09:17:43.552843 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" 
pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:49 crc kubenswrapper[5117]: I0123 09:17:49.570394 5117 generic.go:358] "Generic (PLEG): container finished" podID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerID="e2ad79f23ec79bd0ec92e29a9dae85b180d7b055742b2ad20b0eeafd39c63940" exitCode=0 Jan 23 09:17:49 crc kubenswrapper[5117]: I0123 09:17:49.570496 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerDied","Data":"e2ad79f23ec79bd0ec92e29a9dae85b180d7b055742b2ad20b0eeafd39c63940"} Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.837993 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.887041 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-run\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.887289 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-pull\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.887332 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-ca-bundles\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.887501 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-root\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.887652 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-node-pullsecrets\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.887741 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.888892 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.889141 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890350 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-blob-cache\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890450 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zgs75\" (UniqueName: \"kubernetes.io/projected/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-kube-api-access-zgs75\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890487 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-push\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890540 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildworkdir\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890585 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-proxy-ca-bundles\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890711 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-system-configs\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.890767 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildcachedir\") pod \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\" (UID: \"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe\") " Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.891521 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.891634 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildcachedir" 
(OuterVolumeSpecName: "buildcachedir") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.892341 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.892669 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.892691 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.893492 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.897666 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.901346 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-kube-api-access-zgs75" (OuterVolumeSpecName: "kube-api-access-zgs75") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "kube-api-access-zgs75". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.905072 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:17:50 crc kubenswrapper[5117]: I0123 09:17:50.905317 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993887 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993934 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993947 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993957 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993971 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zgs75\" (UniqueName: \"kubernetes.io/projected/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-kube-api-access-zgs75\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993982 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:50.993993 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:51.595752 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-core-2-build" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:51.595755 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-core-2-build" event={"ID":"7553de07-e4b6-45e2-a0fc-3a48aabd4cbe","Type":"ContainerDied","Data":"81b044ec48769ecda9e14fe517f118cfdf9ab924c4e214190f636cdf3a472d5a"} Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:51.596446 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="81b044ec48769ecda9e14fe517f118cfdf9ab924c4e214190f636cdf3a472d5a" Jan 23 09:17:51 crc kubenswrapper[5117]: I0123 09:17:51.977302 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:17:52 crc kubenswrapper[5117]: I0123 09:17:52.009186 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:53 crc kubenswrapper[5117]: I0123 09:17:53.564624 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" (UID: "7553de07-e4b6-45e2-a0fc-3a48aabd4cbe"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:17:53 crc kubenswrapper[5117]: I0123 09:17:53.633415 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/7553de07-e4b6-45e2-a0fc-3a48aabd4cbe-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:54 crc kubenswrapper[5117]: I0123 09:17:54.568331 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:54 crc kubenswrapper[5117]: I0123 09:17:54.606869 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mz466"] Jan 23 09:17:54 crc kubenswrapper[5117]: I0123 09:17:54.614356 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-mz466" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="registry-server" containerID="cri-o://95d10d77154bb1847b54f2053a80c7387e12fe531180bc71c9b6c1fdaba95c2a" gracePeriod=2 Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.647023 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648226 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="docker-build" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648245 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="docker-build" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648261 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="manage-dockerfile" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648269 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="manage-dockerfile" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648282 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="git-clone" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648290 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="git-clone" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.648433 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7553de07-e4b6-45e2-a0fc-3a48aabd4cbe" containerName="docker-build" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.945352 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 
23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.946147 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.949991 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-ca\"" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.949994 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-global-ca\"" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.950168 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:17:55 crc kubenswrapper[5117]: I0123 09:17:55.950349 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-1-sys-config\"" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.088834 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.088917 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.088982 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-push\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089003 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089027 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089048 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-pull\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089069 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"kube-api-access-jqzxj\" (UniqueName: \"kubernetes.io/projected/b7373543-6c97-490a-885d-65d02c93f6e4-kube-api-access-jqzxj\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089095 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089171 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089194 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089215 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.089256 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190413 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190469 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190504 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190530 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for 
volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190574 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190602 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190671 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-push\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190728 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190760 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190758 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-node-pullsecrets\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190794 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-pull\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190842 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jqzxj\" (UniqueName: \"kubernetes.io/projected/b7373543-6c97-490a-885d-65d02c93f6e4-kube-api-access-jqzxj\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190876 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.190970 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-build-blob-cache\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.191035 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-buildcachedir\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.191195 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-run\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.191299 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-root\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.191340 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-system-configs\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.191377 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-proxy-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.191719 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-buildworkdir\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.192004 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-ca-bundles\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.200787 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-push\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " 
pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.200804 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-pull\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.211272 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jqzxj\" (UniqueName: \"kubernetes.io/projected/b7373543-6c97-490a-885d-65d02c93f6e4-kube-api-access-jqzxj\") pod \"sg-bridge-1-build\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.331438 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.631084 5117 generic.go:358] "Generic (PLEG): container finished" podID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerID="95d10d77154bb1847b54f2053a80c7387e12fe531180bc71c9b6c1fdaba95c2a" exitCode=0 Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.631577 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerDied","Data":"95d10d77154bb1847b54f2053a80c7387e12fe531180bc71c9b6c1fdaba95c2a"} Jan 23 09:17:56 crc kubenswrapper[5117]: W0123 09:17:56.744633 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-podb7373543_6c97_490a_885d_65d02c93f6e4.slice/crio-9d12513f1b23621e671e0a0ad74bb38e241b25c5d4681d7797da8c7f695c9ea6 WatchSource:0}: Error finding container 9d12513f1b23621e671e0a0ad74bb38e241b25c5d4681d7797da8c7f695c9ea6: Status 404 returned error can't find the container with id 9d12513f1b23621e671e0a0ad74bb38e241b25c5d4681d7797da8c7f695c9ea6 Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.749311 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.772195 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.900898 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-catalog-content\") pod \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.901111 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-utilities\") pod \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.901234 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7j9s9\" (UniqueName: \"kubernetes.io/projected/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-kube-api-access-7j9s9\") pod \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\" (UID: \"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656\") " Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.902515 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-utilities" (OuterVolumeSpecName: "utilities") pod "d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" (UID: "d9d2ab8d-b08f-4c11-8a4d-89c08daa8656"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.908582 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-kube-api-access-7j9s9" (OuterVolumeSpecName: "kube-api-access-7j9s9") pod "d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" (UID: "d9d2ab8d-b08f-4c11-8a4d-89c08daa8656"). InnerVolumeSpecName "kube-api-access-7j9s9". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:17:56 crc kubenswrapper[5117]: I0123 09:17:56.956735 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" (UID: "d9d2ab8d-b08f-4c11-8a4d-89c08daa8656"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.003847 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.003882 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.003891 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7j9s9\" (UniqueName: \"kubernetes.io/projected/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656-kube-api-access-7j9s9\") on node \"crc\" DevicePath \"\"" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.640745 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-mz466" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.640762 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-mz466" event={"ID":"d9d2ab8d-b08f-4c11-8a4d-89c08daa8656","Type":"ContainerDied","Data":"d188a0a6431cfdcfac4b2dfc796a8a722c8c2b07591239a16d971dbec7aeb61d"} Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.641186 5117 scope.go:117] "RemoveContainer" containerID="95d10d77154bb1847b54f2053a80c7387e12fe531180bc71c9b6c1fdaba95c2a" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.643070 5117 generic.go:358] "Generic (PLEG): container finished" podID="b7373543-6c97-490a-885d-65d02c93f6e4" containerID="1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e" exitCode=0 Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.643276 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b7373543-6c97-490a-885d-65d02c93f6e4","Type":"ContainerDied","Data":"1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e"} Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.643372 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b7373543-6c97-490a-885d-65d02c93f6e4","Type":"ContainerStarted","Data":"9d12513f1b23621e671e0a0ad74bb38e241b25c5d4681d7797da8c7f695c9ea6"} Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.661694 5117 scope.go:117] "RemoveContainer" containerID="1efe1045e977d1306d689be32944d88a918d47bccdebfce0844218ea4abc948b" Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.692963 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-mz466"] Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.698452 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-mz466"] Jan 23 09:17:57 crc kubenswrapper[5117]: I0123 09:17:57.701336 5117 scope.go:117] "RemoveContainer" containerID="13b0f026e2eab193e21429085d605de710d551d4ec36774481b14139f763ede8" Jan 23 09:17:58 crc kubenswrapper[5117]: I0123 09:17:58.653087 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b7373543-6c97-490a-885d-65d02c93f6e4","Type":"ContainerStarted","Data":"ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486"} Jan 23 09:17:58 crc kubenswrapper[5117]: I0123 09:17:58.687344 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/sg-bridge-1-build" podStartSLOduration=3.687321785 podStartE2EDuration="3.687321785s" podCreationTimestamp="2026-01-23 09:17:55 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:17:58.678238521 +0000 UTC m=+1490.434363557" watchObservedRunningTime="2026-01-23 09:17:58.687321785 +0000 UTC m=+1490.443446811" Jan 23 09:17:58 crc kubenswrapper[5117]: I0123 09:17:58.780476 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" path="/var/lib/kubelet/pods/d9d2ab8d-b08f-4c11-8a4d-89c08daa8656/volumes" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.131688 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29485998-58xvp"] Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132539 5117 cpu_manager.go:401] 
"RemoveStaleState: containerMap: removing container" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="extract-content" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132553 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="extract-content" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132572 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="extract-utilities" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132579 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="extract-utilities" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132624 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="registry-server" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132629 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="registry-server" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.132725 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d9d2ab8d-b08f-4c11-8a4d-89c08daa8656" containerName="registry-server" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.138620 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.150576 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485998-58xvp"] Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.150899 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.151127 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.151248 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.156582 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctv6l\" (UniqueName: \"kubernetes.io/projected/15395d2f-4140-4df5-a478-df783b84ba78-kube-api-access-ctv6l\") pod \"auto-csr-approver-29485998-58xvp\" (UID: \"15395d2f-4140-4df5-a478-df783b84ba78\") " pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.257526 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctv6l\" (UniqueName: \"kubernetes.io/projected/15395d2f-4140-4df5-a478-df783b84ba78-kube-api-access-ctv6l\") pod \"auto-csr-approver-29485998-58xvp\" (UID: \"15395d2f-4140-4df5-a478-df783b84ba78\") " pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.278958 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctv6l\" (UniqueName: \"kubernetes.io/projected/15395d2f-4140-4df5-a478-df783b84ba78-kube-api-access-ctv6l\") pod \"auto-csr-approver-29485998-58xvp\" (UID: \"15395d2f-4140-4df5-a478-df783b84ba78\") " 
pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.466228 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:00 crc kubenswrapper[5117]: I0123 09:18:00.671981 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29485998-58xvp"] Jan 23 09:18:00 crc kubenswrapper[5117]: W0123 09:18:00.679269 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod15395d2f_4140_4df5_a478_df783b84ba78.slice/crio-0a2e3ae014e800867a9da95b22bdd85e0e9d6d361ba631dbbd625a168db050b3 WatchSource:0}: Error finding container 0a2e3ae014e800867a9da95b22bdd85e0e9d6d361ba631dbbd625a168db050b3: Status 404 returned error can't find the container with id 0a2e3ae014e800867a9da95b22bdd85e0e9d6d361ba631dbbd625a168db050b3 Jan 23 09:18:01 crc kubenswrapper[5117]: I0123 09:18:01.677509 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485998-58xvp" event={"ID":"15395d2f-4140-4df5-a478-df783b84ba78","Type":"ContainerStarted","Data":"0a2e3ae014e800867a9da95b22bdd85e0e9d6d361ba631dbbd625a168db050b3"} Jan 23 09:18:02 crc kubenswrapper[5117]: I0123 09:18:02.687730 5117 generic.go:358] "Generic (PLEG): container finished" podID="15395d2f-4140-4df5-a478-df783b84ba78" containerID="3b4f2aaf37fe3348978504ad4f99d6609007e7e0b2c95671efb15e8b72df34ba" exitCode=0 Jan 23 09:18:02 crc kubenswrapper[5117]: I0123 09:18:02.687835 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485998-58xvp" event={"ID":"15395d2f-4140-4df5-a478-df783b84ba78","Type":"ContainerDied","Data":"3b4f2aaf37fe3348978504ad4f99d6609007e7e0b2c95671efb15e8b72df34ba"} Jan 23 09:18:03 crc kubenswrapper[5117]: I0123 09:18:03.929774 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:04 crc kubenswrapper[5117]: I0123 09:18:04.004352 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ctv6l\" (UniqueName: \"kubernetes.io/projected/15395d2f-4140-4df5-a478-df783b84ba78-kube-api-access-ctv6l\") pod \"15395d2f-4140-4df5-a478-df783b84ba78\" (UID: \"15395d2f-4140-4df5-a478-df783b84ba78\") " Jan 23 09:18:04 crc kubenswrapper[5117]: I0123 09:18:04.013523 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/15395d2f-4140-4df5-a478-df783b84ba78-kube-api-access-ctv6l" (OuterVolumeSpecName: "kube-api-access-ctv6l") pod "15395d2f-4140-4df5-a478-df783b84ba78" (UID: "15395d2f-4140-4df5-a478-df783b84ba78"). InnerVolumeSpecName "kube-api-access-ctv6l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:18:04 crc kubenswrapper[5117]: I0123 09:18:04.105602 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ctv6l\" (UniqueName: \"kubernetes.io/projected/15395d2f-4140-4df5-a478-df783b84ba78-kube-api-access-ctv6l\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:04 crc kubenswrapper[5117]: I0123 09:18:04.705088 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29485998-58xvp" Jan 23 09:18:04 crc kubenswrapper[5117]: I0123 09:18:04.705254 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29485998-58xvp" event={"ID":"15395d2f-4140-4df5-a478-df783b84ba78","Type":"ContainerDied","Data":"0a2e3ae014e800867a9da95b22bdd85e0e9d6d361ba631dbbd625a168db050b3"} Jan 23 09:18:04 crc kubenswrapper[5117]: I0123 09:18:04.705638 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0a2e3ae014e800867a9da95b22bdd85e0e9d6d361ba631dbbd625a168db050b3" Jan 23 09:18:05 crc kubenswrapper[5117]: I0123 09:18:05.000363 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485992-lhs64"] Jan 23 09:18:05 crc kubenswrapper[5117]: I0123 09:18:05.005103 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485992-lhs64"] Jan 23 09:18:05 crc kubenswrapper[5117]: I0123 09:18:05.846173 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 09:18:05 crc kubenswrapper[5117]: I0123 09:18:05.846437 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/sg-bridge-1-build" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" containerName="docker-build" containerID="cri-o://ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486" gracePeriod=30 Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.479826 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b7373543-6c97-490a-885d-65d02c93f6e4/docker-build/0.log" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.480725 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547184 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-root\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547239 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-proxy-ca-bundles\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547275 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-ca-bundles\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547332 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-push\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547453 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-node-pullsecrets\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547511 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jqzxj\" (UniqueName: \"kubernetes.io/projected/b7373543-6c97-490a-885d-65d02c93f6e4-kube-api-access-jqzxj\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547534 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-buildcachedir\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547558 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-buildworkdir\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547594 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-pull\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547582 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume 
"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547757 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-build-blob-cache\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547777 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547821 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-run\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.547895 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-system-configs\") pod \"b7373543-6c97-490a-885d-65d02c93f6e4\" (UID: \"b7373543-6c97-490a-885d-65d02c93f6e4\") " Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.548036 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.548615 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "buildworkdir". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.548645 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.548665 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.548678 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/b7373543-6c97-490a-885d-65d02c93f6e4-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.549012 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.549051 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.549312 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.549773 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.552657 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.552965 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.553968 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b7373543-6c97-490a-885d-65d02c93f6e4-kube-api-access-jqzxj" (OuterVolumeSpecName: "kube-api-access-jqzxj") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "kube-api-access-jqzxj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.606288 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "b7373543-6c97-490a-885d-65d02c93f6e4" (UID: "b7373543-6c97-490a-885d-65d02c93f6e4"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650075 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650120 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650151 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650180 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650191 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/b7373543-6c97-490a-885d-65d02c93f6e4-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650202 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650215 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jqzxj\" (UniqueName: \"kubernetes.io/projected/b7373543-6c97-490a-885d-65d02c93f6e4-kube-api-access-jqzxj\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650226 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/b7373543-6c97-490a-885d-65d02c93f6e4-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.650236 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/b7373543-6c97-490a-885d-65d02c93f6e4-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.730163 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-1-build_b7373543-6c97-490a-885d-65d02c93f6e4/docker-build/0.log" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.730989 5117 generic.go:358] "Generic (PLEG): container finished" podID="b7373543-6c97-490a-885d-65d02c93f6e4" containerID="ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486" exitCode=1 Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.731085 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-1-build" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.731112 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b7373543-6c97-490a-885d-65d02c93f6e4","Type":"ContainerDied","Data":"ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486"} Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.731208 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-1-build" event={"ID":"b7373543-6c97-490a-885d-65d02c93f6e4","Type":"ContainerDied","Data":"9d12513f1b23621e671e0a0ad74bb38e241b25c5d4681d7797da8c7f695c9ea6"} Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.731246 5117 scope.go:117] "RemoveContainer" containerID="ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.764976 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.775942 5117 scope.go:117] "RemoveContainer" containerID="1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.778730 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="28276b89-2814-41a6-847d-9048fefdb222" path="/var/lib/kubelet/pods/28276b89-2814-41a6-847d-9048fefdb222/volumes" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.779486 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/sg-bridge-1-build"] Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.849423 5117 scope.go:117] "RemoveContainer" containerID="ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486" Jan 23 09:18:06 crc kubenswrapper[5117]: E0123 09:18:06.849765 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486\": container with ID starting with ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486 not found: ID does not exist" containerID="ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.849793 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486"} err="failed to get container status 
\"ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486\": rpc error: code = NotFound desc = could not find container \"ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486\": container with ID starting with ea2569e43fe8bbc64c17344fbefc0e727f58e84f82dd6a98cf143470e06fe486 not found: ID does not exist" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.849815 5117 scope.go:117] "RemoveContainer" containerID="1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e" Jan 23 09:18:06 crc kubenswrapper[5117]: E0123 09:18:06.850400 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e\": container with ID starting with 1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e not found: ID does not exist" containerID="1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e" Jan 23 09:18:06 crc kubenswrapper[5117]: I0123 09:18:06.850428 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e"} err="failed to get container status \"1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e\": rpc error: code = NotFound desc = could not find container \"1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e\": container with ID starting with 1c85a9487c98aacb9091559f3e2e40e0f829ed4c738e957995d23c8005d1495e not found: ID does not exist" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.520238 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522048 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" containerName="docker-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522214 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" containerName="docker-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522385 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="15395d2f-4140-4df5-a478-df783b84ba78" containerName="oc" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522498 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="15395d2f-4140-4df5-a478-df783b84ba78" containerName="oc" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522596 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" containerName="manage-dockerfile" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522676 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" containerName="manage-dockerfile" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.522932 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="15395d2f-4140-4df5-a478-df783b84ba78" containerName="oc" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.523021 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" containerName="docker-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.920648 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.920768 5117 
util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.923477 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-sys-config\"" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.923547 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-global-ca\"" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.923605 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.924670 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"sg-bridge-2-ca\"" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.964472 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.964649 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-pull\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.964789 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.964845 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.964864 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mn9vk\" (UniqueName: \"kubernetes.io/projected/be0920ca-88aa-4084-9a39-0ff84b6ade70-kube-api-access-mn9vk\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.964991 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.965083 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.965112 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-push\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.965180 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.965203 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.965373 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:07 crc kubenswrapper[5117]: I0123 09:18:07.965480 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067155 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067238 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067266 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-push\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067301 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067335 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067379 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067407 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067435 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067472 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-pull\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067528 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildcachedir\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067568 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.067600 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mn9vk\" (UniqueName: \"kubernetes.io/projected/be0920ca-88aa-4084-9a39-0ff84b6ade70-kube-api-access-mn9vk\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.068693 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildcachedir\") pod 
\"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.068807 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-node-pullsecrets\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.069318 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-root\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.069560 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildworkdir\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.069733 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-proxy-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.069786 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-system-configs\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.069873 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-run\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.069938 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-blob-cache\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.070027 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-ca-bundles\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.073765 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-push\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 
09:18:08.074706 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-pull\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.092222 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mn9vk\" (UniqueName: \"kubernetes.io/projected/be0920ca-88aa-4084-9a39-0ff84b6ade70-kube-api-access-mn9vk\") pod \"sg-bridge-2-build\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.240118 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.641063 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/sg-bridge-2-build"] Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.746361 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerStarted","Data":"ac03093ed23ac3811db771299430ff1e95d57215012eb0e7c3dbc0b5d1501871"} Jan 23 09:18:08 crc kubenswrapper[5117]: I0123 09:18:08.780899 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b7373543-6c97-490a-885d-65d02c93f6e4" path="/var/lib/kubelet/pods/b7373543-6c97-490a-885d-65d02c93f6e4/volumes" Jan 23 09:18:09 crc kubenswrapper[5117]: I0123 09:18:09.758194 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerStarted","Data":"755d08306d233a74e011db9302465ed7335a0a63b81191eeb7bc6b9a43b8de0a"} Jan 23 09:18:10 crc kubenswrapper[5117]: I0123 09:18:10.769261 5117 generic.go:358] "Generic (PLEG): container finished" podID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerID="755d08306d233a74e011db9302465ed7335a0a63b81191eeb7bc6b9a43b8de0a" exitCode=0 Jan 23 09:18:10 crc kubenswrapper[5117]: I0123 09:18:10.769444 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerDied","Data":"755d08306d233a74e011db9302465ed7335a0a63b81191eeb7bc6b9a43b8de0a"} Jan 23 09:18:11 crc kubenswrapper[5117]: I0123 09:18:11.777917 5117 generic.go:358] "Generic (PLEG): container finished" podID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerID="983c12a6380955c07720e30995f17402c4d3a4bcfaad929c7b4160d8540b63f6" exitCode=0 Jan 23 09:18:11 crc kubenswrapper[5117]: I0123 09:18:11.777993 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerDied","Data":"983c12a6380955c07720e30995f17402c4d3a4bcfaad929c7b4160d8540b63f6"} Jan 23 09:18:11 crc kubenswrapper[5117]: I0123 09:18:11.808193 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_sg-bridge-2-build_be0920ca-88aa-4084-9a39-0ff84b6ade70/manage-dockerfile/0.log" Jan 23 09:18:12 crc kubenswrapper[5117]: I0123 09:18:12.786508 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" 
event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerStarted","Data":"6eb17acd77768679f802e8d121564eb065b8fe34fb94b505ed864805b7c79571"} Jan 23 09:18:15 crc kubenswrapper[5117]: I0123 09:18:15.438907 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:18:15 crc kubenswrapper[5117]: I0123 09:18:15.438946 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:18:15 crc kubenswrapper[5117]: I0123 09:18:15.458046 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:18:15 crc kubenswrapper[5117]: I0123 09:18:15.458558 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:18:15 crc kubenswrapper[5117]: I0123 09:18:15.668332 5117 scope.go:117] "RemoveContainer" containerID="17c84c758fe0aeaacc237c9f0189ec30c1205e5d6ce6fd782eaf906f48ccd386" Jan 23 09:19:15 crc kubenswrapper[5117]: I0123 09:19:15.064251 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:19:15 crc kubenswrapper[5117]: I0123 09:19:15.064928 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:19:22 crc kubenswrapper[5117]: I0123 09:19:22.240217 5117 generic.go:358] "Generic (PLEG): container finished" podID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerID="6eb17acd77768679f802e8d121564eb065b8fe34fb94b505ed864805b7c79571" exitCode=0 Jan 23 09:19:22 crc kubenswrapper[5117]: I0123 09:19:22.240327 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerDied","Data":"6eb17acd77768679f802e8d121564eb065b8fe34fb94b505ed864805b7c79571"} Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.489760 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631103 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildworkdir\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631224 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-pull\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631253 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-system-configs\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631275 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-run\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631302 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-proxy-ca-bundles\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631326 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-push\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631352 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-root\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631385 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-node-pullsecrets\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631490 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildcachedir\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631524 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-ca-bundles\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631580 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mn9vk\" (UniqueName: \"kubernetes.io/projected/be0920ca-88aa-4084-9a39-0ff84b6ade70-kube-api-access-mn9vk\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.631600 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-blob-cache\") pod \"be0920ca-88aa-4084-9a39-0ff84b6ade70\" (UID: \"be0920ca-88aa-4084-9a39-0ff84b6ade70\") " Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.632375 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.632421 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.633488 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.633788 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.633911 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.634059 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.634998 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.643399 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.643476 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.643673 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/be0920ca-88aa-4084-9a39-0ff84b6ade70-kube-api-access-mn9vk" (OuterVolumeSpecName: "kube-api-access-mn9vk") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "kube-api-access-mn9vk". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737092 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737155 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737167 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mn9vk\" (UniqueName: \"kubernetes.io/projected/be0920ca-88aa-4084-9a39-0ff84b6ade70-kube-api-access-mn9vk\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737179 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737189 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737200 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737210 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737219 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737230 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/be0920ca-88aa-4084-9a39-0ff84b6ade70-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.737240 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/be0920ca-88aa-4084-9a39-0ff84b6ade70-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.752905 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:23 crc kubenswrapper[5117]: I0123 09:19:23.838946 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:24 crc kubenswrapper[5117]: I0123 09:19:24.254289 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/sg-bridge-2-build" event={"ID":"be0920ca-88aa-4084-9a39-0ff84b6ade70","Type":"ContainerDied","Data":"ac03093ed23ac3811db771299430ff1e95d57215012eb0e7c3dbc0b5d1501871"} Jan 23 09:19:24 crc kubenswrapper[5117]: I0123 09:19:24.254868 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ac03093ed23ac3811db771299430ff1e95d57215012eb0e7c3dbc0b5d1501871" Jan 23 09:19:24 crc kubenswrapper[5117]: I0123 09:19:24.255048 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/sg-bridge-2-build" Jan 23 09:19:24 crc kubenswrapper[5117]: I0123 09:19:24.328890 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "be0920ca-88aa-4084-9a39-0ff84b6ade70" (UID: "be0920ca-88aa-4084-9a39-0ff84b6ade70"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:24 crc kubenswrapper[5117]: I0123 09:19:24.345943 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/be0920ca-88aa-4084-9a39-0ff84b6ade70-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.627814 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.628865 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="git-clone" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.628883 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="git-clone" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.628917 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="docker-build" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.628924 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="docker-build" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.628934 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="manage-dockerfile" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.628942 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="manage-dockerfile" Jan 23 09:19:27 crc kubenswrapper[5117]: I0123 09:19:27.629059 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="be0920ca-88aa-4084-9a39-0ff84b6ade70" containerName="docker-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.234262 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" 
pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.234543 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.237217 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.237450 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-global-ca\"" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.237593 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-sys-config\"" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.242916 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-1-ca\"" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.398987 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399043 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6jtff\" (UniqueName: \"kubernetes.io/projected/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-kube-api-access-6jtff\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399067 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399117 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399238 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399317 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-run\") pod 
\"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399377 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399479 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399523 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399541 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399595 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.399629 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.500768 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.500840 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc 
kubenswrapper[5117]: I0123 09:19:28.500871 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6jtff\" (UniqueName: \"kubernetes.io/projected/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-kube-api-access-6jtff\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.500897 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.500942 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.500966 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.500988 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501014 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501037 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildcachedir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501056 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501190 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: 
\"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501231 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501323 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501901 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-blob-cache\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.501950 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-root\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.502192 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildworkdir\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.502232 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-node-pullsecrets\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.502620 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-run\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.502652 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.503678 5117 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-ca-bundles\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.503754 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-system-configs\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.510274 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-push\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.510454 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-pull\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.519960 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6jtff\" (UniqueName: \"kubernetes.io/projected/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-kube-api-access-6jtff\") pod \"prometheus-webhook-snmp-1-build\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.554116 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:28 crc kubenswrapper[5117]: I0123 09:19:28.769003 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 09:19:29 crc kubenswrapper[5117]: I0123 09:19:29.296331 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0","Type":"ContainerStarted","Data":"0f4089f03a1ca0553a917a73dbc15c3f79bb58cf48e5469e4193f6393cf68370"} Jan 23 09:19:30 crc kubenswrapper[5117]: I0123 09:19:30.308002 5117 generic.go:358] "Generic (PLEG): container finished" podID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerID="12063aedbd11370404ed4f96216eb9c187996566f7e5016d526d7ad83688c66f" exitCode=0 Jan 23 09:19:30 crc kubenswrapper[5117]: I0123 09:19:30.308064 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0","Type":"ContainerDied","Data":"12063aedbd11370404ed4f96216eb9c187996566f7e5016d526d7ad83688c66f"} Jan 23 09:19:31 crc kubenswrapper[5117]: I0123 09:19:31.316753 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0","Type":"ContainerStarted","Data":"6da246e1870d6a412d18e80cb8a2ec8d4d6cb8284be05028c77a4942150b61ec"} Jan 23 09:19:31 crc kubenswrapper[5117]: I0123 09:19:31.341033 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-1-build" podStartSLOduration=4.341012027 podStartE2EDuration="4.341012027s" podCreationTimestamp="2026-01-23 09:19:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:31.336420334 +0000 UTC m=+1583.092545360" watchObservedRunningTime="2026-01-23 09:19:31.341012027 +0000 UTC m=+1583.097137073" Jan 23 09:19:39 crc kubenswrapper[5117]: E0123 09:19:39.322202 5117 kubelet.go:2642] "Housekeeping took longer than expected" err="housekeeping took too long" expected="1s" actual="2.552s" Jan 23 09:19:39 crc kubenswrapper[5117]: I0123 09:19:39.332067 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 09:19:39 crc kubenswrapper[5117]: I0123 09:19:39.332813 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/prometheus-webhook-snmp-1-build" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerName="docker-build" containerID="cri-o://6da246e1870d6a412d18e80cb8a2ec8d4d6cb8284be05028c77a4942150b61ec" gracePeriod=30 Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.042464 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.302997 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.303162 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.305731 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-global-ca\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.305763 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-ca\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.306189 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-webhook-snmp-2-sys-config\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.375264 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377004 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377055 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377082 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377159 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377199 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377235 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: 
\"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377267 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377283 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377335 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377381 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.377433 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6hw7l\" (UniqueName: \"kubernetes.io/projected/bc483361-110f-4ab5-8733-665909b53b41-kube-api-access-6hw7l\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.383272 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_3ff1b6ab-b3eb-4774-9c41-829f32d20ba0/docker-build/0.log" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.383692 5117 generic.go:358] "Generic (PLEG): container finished" podID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerID="6da246e1870d6a412d18e80cb8a2ec8d4d6cb8284be05028c77a4942150b61ec" exitCode=1 Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.383762 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0","Type":"ContainerDied","Data":"6da246e1870d6a412d18e80cb8a2ec8d4d6cb8284be05028c77a4942150b61ec"} Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478487 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6hw7l\" (UniqueName: \"kubernetes.io/projected/bc483361-110f-4ab5-8733-665909b53b41-kube-api-access-6hw7l\") pod \"prometheus-webhook-snmp-2-build\" (UID: 
\"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478559 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478579 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478610 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478632 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478718 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-buildcachedir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478794 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478840 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478855 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478905 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" 
(UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478920 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478937 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-buildworkdir\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.478992 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.479043 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.479096 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-node-pullsecrets\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.479451 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-run\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.479464 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-proxy-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.479675 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-build-blob-cache\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 
09:19:40.479744 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-root\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.479911 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-system-configs\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.480338 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-ca-bundles\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.484171 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-pull\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.484231 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-push\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.498110 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6hw7l\" (UniqueName: \"kubernetes.io/projected/bc483361-110f-4ab5-8733-665909b53b41-kube-api-access-6hw7l\") pod \"prometheus-webhook-snmp-2-build\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.682818 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.790841 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_3ff1b6ab-b3eb-4774-9c41-829f32d20ba0/docker-build/0.log" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.791672 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.870447 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-2-build"] Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.886812 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-pull\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.886875 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-ca-bundles\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.886942 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildworkdir\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.886968 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-node-pullsecrets\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887005 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-system-configs\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887080 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-run\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887118 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-root\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887172 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-blob-cache\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887203 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-push\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 
09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887237 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-proxy-ca-bundles\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887288 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildcachedir\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887314 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6jtff\" (UniqueName: \"kubernetes.io/projected/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-kube-api-access-6jtff\") pod \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\" (UID: \"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0\") " Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.888051 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.888337 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.888352 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.888417 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.888756 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.889155 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.887339 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.889403 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.894533 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.894631 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.894904 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-kube-api-access-6jtff" (OuterVolumeSpecName: "kube-api-access-6jtff") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "kube-api-access-6jtff". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.939085 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" (UID: "3ff1b6ab-b3eb-4774-9c41-829f32d20ba0"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989733 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989778 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989850 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989897 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989910 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989930 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6jtff\" (UniqueName: \"kubernetes.io/projected/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-kube-api-access-6jtff\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989942 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989951 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989961 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989973 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989983 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:40 crc kubenswrapper[5117]: I0123 09:19:40.989992 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.391470 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" 
pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerStarted","Data":"318d3dabb7e95267062a34c83031bb1253dc5a74e1fe328ead7707b2b6f1717e"} Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.392971 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-1-build_3ff1b6ab-b3eb-4774-9c41-829f32d20ba0/docker-build/0.log" Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.393616 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-1-build" event={"ID":"3ff1b6ab-b3eb-4774-9c41-829f32d20ba0","Type":"ContainerDied","Data":"0f4089f03a1ca0553a917a73dbc15c3f79bb58cf48e5469e4193f6393cf68370"} Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.393663 5117 scope.go:117] "RemoveContainer" containerID="6da246e1870d6a412d18e80cb8a2ec8d4d6cb8284be05028c77a4942150b61ec" Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.393687 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-1-build" Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.437330 5117 scope.go:117] "RemoveContainer" containerID="12063aedbd11370404ed4f96216eb9c187996566f7e5016d526d7ad83688c66f" Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.438543 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 09:19:41 crc kubenswrapper[5117]: I0123 09:19:41.444459 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/prometheus-webhook-snmp-1-build"] Jan 23 09:19:42 crc kubenswrapper[5117]: I0123 09:19:42.400422 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerStarted","Data":"4f0b83b06ac08f3c1957c6b4dd1c4cf1b618115523f3a1504ee2a202048cb11d"} Jan 23 09:19:42 crc kubenswrapper[5117]: I0123 09:19:42.786715 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" path="/var/lib/kubelet/pods/3ff1b6ab-b3eb-4774-9c41-829f32d20ba0/volumes" Jan 23 09:19:43 crc kubenswrapper[5117]: I0123 09:19:43.411547 5117 generic.go:358] "Generic (PLEG): container finished" podID="bc483361-110f-4ab5-8733-665909b53b41" containerID="4f0b83b06ac08f3c1957c6b4dd1c4cf1b618115523f3a1504ee2a202048cb11d" exitCode=0 Jan 23 09:19:43 crc kubenswrapper[5117]: I0123 09:19:43.411642 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerDied","Data":"4f0b83b06ac08f3c1957c6b4dd1c4cf1b618115523f3a1504ee2a202048cb11d"} Jan 23 09:19:44 crc kubenswrapper[5117]: I0123 09:19:44.420011 5117 generic.go:358] "Generic (PLEG): container finished" podID="bc483361-110f-4ab5-8733-665909b53b41" containerID="ff06b56300a0784a008d60b9a4009891cae317b28162f6af1ae7bd0a91a789dc" exitCode=0 Jan 23 09:19:44 crc kubenswrapper[5117]: I0123 09:19:44.421415 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerDied","Data":"ff06b56300a0784a008d60b9a4009891cae317b28162f6af1ae7bd0a91a789dc"} Jan 23 09:19:44 crc kubenswrapper[5117]: I0123 09:19:44.465820 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_prometheus-webhook-snmp-2-build_bc483361-110f-4ab5-8733-665909b53b41/manage-dockerfile/0.log" Jan 23 09:19:45 crc kubenswrapper[5117]: I0123 09:19:45.063244 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:19:45 crc kubenswrapper[5117]: I0123 09:19:45.063601 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:19:45 crc kubenswrapper[5117]: I0123 09:19:45.430322 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerStarted","Data":"b7306ce9b93dba6890d9b1b8a7073c348d84160e4038cfcbde25cadb64832540"} Jan 23 09:19:45 crc kubenswrapper[5117]: I0123 09:19:45.456393 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-webhook-snmp-2-build" podStartSLOduration=5.456373547 podStartE2EDuration="5.456373547s" podCreationTimestamp="2026-01-23 09:19:40 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:19:45.453314075 +0000 UTC m=+1597.209439121" watchObservedRunningTime="2026-01-23 09:19:45.456373547 +0000 UTC m=+1597.212498583" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.139122 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486000-cskgv"] Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.141012 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerName="manage-dockerfile" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.141063 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerName="manage-dockerfile" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.141081 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerName="docker-build" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.141088 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerName="docker-build" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.141235 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3ff1b6ab-b3eb-4774-9c41-829f32d20ba0" containerName="docker-build" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.226748 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486000-cskgv"] Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.226924 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.232883 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.233175 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.233537 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.278902 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hmgld\" (UniqueName: \"kubernetes.io/projected/4ea574d9-c557-439d-be8a-bf6da4bdc517-kube-api-access-hmgld\") pod \"auto-csr-approver-29486000-cskgv\" (UID: \"4ea574d9-c557-439d-be8a-bf6da4bdc517\") " pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.379869 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hmgld\" (UniqueName: \"kubernetes.io/projected/4ea574d9-c557-439d-be8a-bf6da4bdc517-kube-api-access-hmgld\") pod \"auto-csr-approver-29486000-cskgv\" (UID: \"4ea574d9-c557-439d-be8a-bf6da4bdc517\") " pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.401535 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-hmgld\" (UniqueName: \"kubernetes.io/projected/4ea574d9-c557-439d-be8a-bf6da4bdc517-kube-api-access-hmgld\") pod \"auto-csr-approver-29486000-cskgv\" (UID: \"4ea574d9-c557-439d-be8a-bf6da4bdc517\") " pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.546468 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:00 crc kubenswrapper[5117]: I0123 09:20:00.802031 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486000-cskgv"] Jan 23 09:20:01 crc kubenswrapper[5117]: I0123 09:20:01.546327 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486000-cskgv" event={"ID":"4ea574d9-c557-439d-be8a-bf6da4bdc517","Type":"ContainerStarted","Data":"b9db5e71330732c507136d5347879988286de2433e97f76444fd4a43425bc59e"} Jan 23 09:20:07 crc kubenswrapper[5117]: I0123 09:20:07.603559 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486000-cskgv" event={"ID":"4ea574d9-c557-439d-be8a-bf6da4bdc517","Type":"ContainerStarted","Data":"cbab37bcbe0c7a08de05019a18c98e913b189d959643d2998e1c077fa54fd662"} Jan 23 09:20:08 crc kubenswrapper[5117]: I0123 09:20:08.613351 5117 generic.go:358] "Generic (PLEG): container finished" podID="4ea574d9-c557-439d-be8a-bf6da4bdc517" containerID="cbab37bcbe0c7a08de05019a18c98e913b189d959643d2998e1c077fa54fd662" exitCode=0 Jan 23 09:20:08 crc kubenswrapper[5117]: I0123 09:20:08.613534 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486000-cskgv" event={"ID":"4ea574d9-c557-439d-be8a-bf6da4bdc517","Type":"ContainerDied","Data":"cbab37bcbe0c7a08de05019a18c98e913b189d959643d2998e1c077fa54fd662"} Jan 23 09:20:09 crc kubenswrapper[5117]: I0123 09:20:09.876348 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.027800 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hmgld\" (UniqueName: \"kubernetes.io/projected/4ea574d9-c557-439d-be8a-bf6da4bdc517-kube-api-access-hmgld\") pod \"4ea574d9-c557-439d-be8a-bf6da4bdc517\" (UID: \"4ea574d9-c557-439d-be8a-bf6da4bdc517\") " Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.037642 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4ea574d9-c557-439d-be8a-bf6da4bdc517-kube-api-access-hmgld" (OuterVolumeSpecName: "kube-api-access-hmgld") pod "4ea574d9-c557-439d-be8a-bf6da4bdc517" (UID: "4ea574d9-c557-439d-be8a-bf6da4bdc517"). InnerVolumeSpecName "kube-api-access-hmgld". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.133493 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hmgld\" (UniqueName: \"kubernetes.io/projected/4ea574d9-c557-439d-be8a-bf6da4bdc517-kube-api-access-hmgld\") on node \"crc\" DevicePath \"\"" Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.632954 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486000-cskgv" event={"ID":"4ea574d9-c557-439d-be8a-bf6da4bdc517","Type":"ContainerDied","Data":"b9db5e71330732c507136d5347879988286de2433e97f76444fd4a43425bc59e"} Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.633018 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="b9db5e71330732c507136d5347879988286de2433e97f76444fd4a43425bc59e" Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.633015 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486000-cskgv" Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.955159 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485994-msdx6"] Jan 23 09:20:10 crc kubenswrapper[5117]: I0123 09:20:10.969507 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485994-msdx6"] Jan 23 09:20:12 crc kubenswrapper[5117]: I0123 09:20:12.778188 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="3e459078-f382-4bad-bad1-1d440234381c" path="/var/lib/kubelet/pods/3e459078-f382-4bad-bad1-1d440234381c/volumes" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.063791 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.064559 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.064663 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.065880 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.065975 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" gracePeriod=600 Jan 23 09:20:15 crc kubenswrapper[5117]: E0123 09:20:15.299314 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.677597 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" exitCode=0 Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.677663 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9"} Jan 23 
09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.677744 5117 scope.go:117] "RemoveContainer" containerID="26aa588973e2b79613df878d11883a627be7e28f6aad990afd10a1ff422b5018" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.678889 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:20:15 crc kubenswrapper[5117]: E0123 09:20:15.679439 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:20:15 crc kubenswrapper[5117]: I0123 09:20:15.801659 5117 scope.go:117] "RemoveContainer" containerID="d2c12d063354522630002ec95f20124ecc7b86028e85b2513a57a58c26361b0b" Jan 23 09:20:28 crc kubenswrapper[5117]: I0123 09:20:28.781183 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:20:28 crc kubenswrapper[5117]: E0123 09:20:28.782507 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:20:41 crc kubenswrapper[5117]: I0123 09:20:41.770918 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:20:41 crc kubenswrapper[5117]: E0123 09:20:41.772354 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:20:53 crc kubenswrapper[5117]: I0123 09:20:53.771179 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:20:53 crc kubenswrapper[5117]: E0123 09:20:53.772010 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:21:08 crc kubenswrapper[5117]: I0123 09:21:08.792507 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:21:08 crc kubenswrapper[5117]: E0123 09:21:08.793707 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:21:21 crc kubenswrapper[5117]: I0123 09:21:21.771596 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:21:21 crc kubenswrapper[5117]: E0123 09:21:21.772935 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:21:34 crc kubenswrapper[5117]: I0123 09:21:34.770717 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:21:34 crc kubenswrapper[5117]: E0123 09:21:34.771901 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:21:40 crc kubenswrapper[5117]: I0123 09:21:40.333363 5117 generic.go:358] "Generic (PLEG): container finished" podID="bc483361-110f-4ab5-8733-665909b53b41" containerID="b7306ce9b93dba6890d9b1b8a7073c348d84160e4038cfcbde25cadb64832540" exitCode=0 Jan 23 09:21:40 crc kubenswrapper[5117]: I0123 09:21:40.333450 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerDied","Data":"b7306ce9b93dba6890d9b1b8a7073c348d84160e4038cfcbde25cadb64832540"} Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.588022 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678496 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-build-blob-cache\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678555 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-system-configs\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678601 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-root\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678646 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-pull\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678709 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-buildworkdir\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678734 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-push\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678758 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-run\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678800 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-buildcachedir\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678824 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-node-pullsecrets\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678845 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-ca-bundles\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678908 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-proxy-ca-bundles\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.678930 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6hw7l\" (UniqueName: \"kubernetes.io/projected/bc483361-110f-4ab5-8733-665909b53b41-kube-api-access-6hw7l\") pod \"bc483361-110f-4ab5-8733-665909b53b41\" (UID: \"bc483361-110f-4ab5-8733-665909b53b41\") " Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.679528 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.679571 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.680556 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.680638 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.680846 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.681070 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). 
InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.682171 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.684226 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.693874 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/bc483361-110f-4ab5-8733-665909b53b41-kube-api-access-6hw7l" (OuterVolumeSpecName: "kube-api-access-6hw7l") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "kube-api-access-6hw7l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.693946 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780373 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780408 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/bc483361-110f-4ab5-8733-665909b53b41-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780417 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780428 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780438 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6hw7l\" (UniqueName: \"kubernetes.io/projected/bc483361-110f-4ab5-8733-665909b53b41-kube-api-access-6hw7l\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780448 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/bc483361-110f-4ab5-8733-665909b53b41-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780457 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780468 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780477 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/bc483361-110f-4ab5-8733-665909b53b41-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.780485 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.782119 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "build-blob-cache". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:41 crc kubenswrapper[5117]: I0123 09:21:41.882261 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:42 crc kubenswrapper[5117]: I0123 09:21:42.350647 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-webhook-snmp-2-build" event={"ID":"bc483361-110f-4ab5-8733-665909b53b41","Type":"ContainerDied","Data":"318d3dabb7e95267062a34c83031bb1253dc5a74e1fe328ead7707b2b6f1717e"} Jan 23 09:21:42 crc kubenswrapper[5117]: I0123 09:21:42.350693 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="318d3dabb7e95267062a34c83031bb1253dc5a74e1fe328ead7707b2b6f1717e" Jan 23 09:21:42 crc kubenswrapper[5117]: I0123 09:21:42.350763 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-webhook-snmp-2-build" Jan 23 09:21:42 crc kubenswrapper[5117]: I0123 09:21:42.631720 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "bc483361-110f-4ab5-8733-665909b53b41" (UID: "bc483361-110f-4ab5-8733-665909b53b41"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:42 crc kubenswrapper[5117]: I0123 09:21:42.692367 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/bc483361-110f-4ab5-8733-665909b53b41-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:46 crc kubenswrapper[5117]: I0123 09:21:46.770699 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:21:46 crc kubenswrapper[5117]: E0123 09:21:46.771054 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.480742 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481347 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="manage-dockerfile" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481360 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="manage-dockerfile" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481374 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4ea574d9-c557-439d-be8a-bf6da4bdc517" containerName="oc" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481380 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="4ea574d9-c557-439d-be8a-bf6da4bdc517" containerName="oc" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481404 5117 
cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="git-clone" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481410 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="git-clone" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481424 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="docker-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481430 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="docker-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481531 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="4ea574d9-c557-439d-be8a-bf6da4bdc517" containerName="oc" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.481549 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="bc483361-110f-4ab5-8733-665909b53b41" containerName="docker-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.741035 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.741401 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.743930 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-sys-config\"" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.743956 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.744719 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-global-ca\"" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.749219 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-1-ca\"" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.824944 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825018 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825062 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: 
\"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825314 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825420 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825720 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825918 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.825981 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.826047 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.826082 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.826162 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.826264 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-bnwmj\" (UniqueName: \"kubernetes.io/projected/9251f311-df90-4161-9964-53ca7054df49-kube-api-access-bnwmj\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.927559 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-bnwmj\" (UniqueName: \"kubernetes.io/projected/9251f311-df90-4161-9964-53ca7054df49-kube-api-access-bnwmj\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.927641 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.927676 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.927700 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928209 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-build-blob-cache\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928268 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928315 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume 
\"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928380 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928435 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928465 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928537 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-buildcachedir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928586 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928629 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.928930 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.929004 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-buildcachedir\") pod 
\"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.929057 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-buildworkdir\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.929575 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-system-configs\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.929670 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-ca-bundles\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.929919 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-run\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.930065 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-node-pullsecrets\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.930260 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-root\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.936449 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.936486 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " 
pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:51 crc kubenswrapper[5117]: I0123 09:21:51.949792 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-bnwmj\" (UniqueName: \"kubernetes.io/projected/9251f311-df90-4161-9964-53ca7054df49-kube-api-access-bnwmj\") pod \"service-telemetry-operator-bundle-1-build\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:52 crc kubenswrapper[5117]: I0123 09:21:52.062789 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:52 crc kubenswrapper[5117]: I0123 09:21:52.275946 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 23 09:21:52 crc kubenswrapper[5117]: I0123 09:21:52.282860 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:21:52 crc kubenswrapper[5117]: I0123 09:21:52.439951 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"9251f311-df90-4161-9964-53ca7054df49","Type":"ContainerStarted","Data":"11388194f826e83b65e263b97da260c88d5a3c9b3b54410738074b3564a42bbe"} Jan 23 09:21:53 crc kubenswrapper[5117]: I0123 09:21:53.447175 5117 generic.go:358] "Generic (PLEG): container finished" podID="9251f311-df90-4161-9964-53ca7054df49" containerID="60ee385069963db5254faf81bc0ad5b8338e82036d199c1edbbd7022ad057d35" exitCode=0 Jan 23 09:21:53 crc kubenswrapper[5117]: I0123 09:21:53.447253 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"9251f311-df90-4161-9964-53ca7054df49","Type":"ContainerDied","Data":"60ee385069963db5254faf81bc0ad5b8338e82036d199c1edbbd7022ad057d35"} Jan 23 09:21:54 crc kubenswrapper[5117]: I0123 09:21:54.458550 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_9251f311-df90-4161-9964-53ca7054df49/docker-build/0.log" Jan 23 09:21:54 crc kubenswrapper[5117]: I0123 09:21:54.459293 5117 generic.go:358] "Generic (PLEG): container finished" podID="9251f311-df90-4161-9964-53ca7054df49" containerID="0205e928a4eefcc626b5f331383ff7fdc89d067e29bed8bf9151483455cdb845" exitCode=1 Jan 23 09:21:54 crc kubenswrapper[5117]: I0123 09:21:54.459421 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"9251f311-df90-4161-9964-53ca7054df49","Type":"ContainerDied","Data":"0205e928a4eefcc626b5f331383ff7fdc89d067e29bed8bf9151483455cdb845"} Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.721831 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_9251f311-df90-4161-9964-53ca7054df49/docker-build/0.log" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.722228 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892425 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-run\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892509 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-build-blob-cache\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892566 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-bnwmj\" (UniqueName: \"kubernetes.io/projected/9251f311-df90-4161-9964-53ca7054df49-kube-api-access-bnwmj\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892596 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-proxy-ca-bundles\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892626 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-node-pullsecrets\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892665 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-buildworkdir\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892777 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892861 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-root\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.892993 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-ca-bundles\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893038 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-buildcachedir\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893095 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-system-configs\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893118 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893157 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-push\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893198 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-pull\") pod \"9251f311-df90-4161-9964-53ca7054df49\" (UID: \"9251f311-df90-4161-9964-53ca7054df49\") " Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893210 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893593 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "container-storage-run". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893810 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893833 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893824 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893902 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893914 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893932 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/9251f311-df90-4161-9964-53ca7054df49-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.893948 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.894230 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.896756 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "container-storage-root". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.899623 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.900736 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.901015 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9251f311-df90-4161-9964-53ca7054df49-kube-api-access-bnwmj" (OuterVolumeSpecName: "kube-api-access-bnwmj") pod "9251f311-df90-4161-9964-53ca7054df49" (UID: "9251f311-df90-4161-9964-53ca7054df49"). InnerVolumeSpecName "kube-api-access-bnwmj". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995569 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995640 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995660 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/9251f311-df90-4161-9964-53ca7054df49-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995677 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995690 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-bnwmj\" (UniqueName: \"kubernetes.io/projected/9251f311-df90-4161-9964-53ca7054df49-kube-api-access-bnwmj\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995703 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995715 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/9251f311-df90-4161-9964-53ca7054df49-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:55 crc kubenswrapper[5117]: I0123 09:21:55.995725 5117 
reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/9251f311-df90-4161-9964-53ca7054df49-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:21:56 crc kubenswrapper[5117]: I0123 09:21:56.475581 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-1-build_9251f311-df90-4161-9964-53ca7054df49/docker-build/0.log" Jan 23 09:21:56 crc kubenswrapper[5117]: I0123 09:21:56.476518 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-1-build" Jan 23 09:21:56 crc kubenswrapper[5117]: I0123 09:21:56.476530 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-1-build" event={"ID":"9251f311-df90-4161-9964-53ca7054df49","Type":"ContainerDied","Data":"11388194f826e83b65e263b97da260c88d5a3c9b3b54410738074b3564a42bbe"} Jan 23 09:21:56 crc kubenswrapper[5117]: I0123 09:21:56.476578 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="11388194f826e83b65e263b97da260c88d5a3c9b3b54410738074b3564a42bbe" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.149029 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486002-ddpmt"] Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.150058 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9251f311-df90-4161-9964-53ca7054df49" containerName="docker-build" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.150076 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9251f311-df90-4161-9964-53ca7054df49" containerName="docker-build" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.150116 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9251f311-df90-4161-9964-53ca7054df49" containerName="manage-dockerfile" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.150149 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9251f311-df90-4161-9964-53ca7054df49" containerName="manage-dockerfile" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.150313 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="9251f311-df90-4161-9964-53ca7054df49" containerName="docker-build" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.397772 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486002-ddpmt"] Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.397964 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.401688 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.401965 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.403317 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.562028 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-n2whh\" (UniqueName: \"kubernetes.io/projected/4f0c8992-ec3a-49be-9dfd-2801a6083d7a-kube-api-access-n2whh\") pod \"auto-csr-approver-29486002-ddpmt\" (UID: \"4f0c8992-ec3a-49be-9dfd-2801a6083d7a\") " pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.663814 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-n2whh\" (UniqueName: \"kubernetes.io/projected/4f0c8992-ec3a-49be-9dfd-2801a6083d7a-kube-api-access-n2whh\") pod \"auto-csr-approver-29486002-ddpmt\" (UID: \"4f0c8992-ec3a-49be-9dfd-2801a6083d7a\") " pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.694010 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-n2whh\" (UniqueName: \"kubernetes.io/projected/4f0c8992-ec3a-49be-9dfd-2801a6083d7a-kube-api-access-n2whh\") pod \"auto-csr-approver-29486002-ddpmt\" (UID: \"4f0c8992-ec3a-49be-9dfd-2801a6083d7a\") " pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.715705 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.772488 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:22:00 crc kubenswrapper[5117]: E0123 09:22:00.772786 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:22:00 crc kubenswrapper[5117]: I0123 09:22:00.930525 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486002-ddpmt"] Jan 23 09:22:01 crc kubenswrapper[5117]: I0123 09:22:01.517041 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" event={"ID":"4f0c8992-ec3a-49be-9dfd-2801a6083d7a","Type":"ContainerStarted","Data":"ad54faa2b833b6c00bcc522ce3773e3439ea47a3a8fc7e20fe8b5e59778dd401"} Jan 23 09:22:02 crc kubenswrapper[5117]: I0123 09:22:02.000699 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 23 09:22:02 crc kubenswrapper[5117]: I0123 09:22:02.008954 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-1-build"] Jan 23 09:22:02 crc kubenswrapper[5117]: I0123 09:22:02.783403 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9251f311-df90-4161-9964-53ca7054df49" path="/var/lib/kubelet/pods/9251f311-df90-4161-9964-53ca7054df49/volumes" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.533488 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" event={"ID":"4f0c8992-ec3a-49be-9dfd-2801a6083d7a","Type":"ContainerStarted","Data":"bbe2a9fb45a14a8d8d1ee7a74daef83e236d3110ce459b6d6de8a0fe4514f6c1"} Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.547987 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" podStartSLOduration=1.414840818 podStartE2EDuration="3.547969465s" podCreationTimestamp="2026-01-23 09:22:00 +0000 UTC" firstStartedPulling="2026-01-23 09:22:00.933891367 +0000 UTC m=+1732.690016393" lastFinishedPulling="2026-01-23 09:22:03.067020014 +0000 UTC m=+1734.823145040" observedRunningTime="2026-01-23 09:22:03.546190194 +0000 UTC m=+1735.302315220" watchObservedRunningTime="2026-01-23 09:22:03.547969465 +0000 UTC m=+1735.304094491" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.588230 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.593719 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.595902 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-global-ca\"" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.596040 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.596389 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-ca\"" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.596550 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-bundle-2-sys-config\"" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.605099 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707052 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707094 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707116 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707174 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707202 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707219 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume 
\"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707242 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-gm8bb\" (UniqueName: \"kubernetes.io/projected/45221e50-21d3-404c-9fd1-ee75abc5ff2f-kube-api-access-gm8bb\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707261 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707284 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707553 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707592 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.707694 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.808856 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809010 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809054 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-node-pullsecrets\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809084 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809225 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809313 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809766 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809794 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-system-configs\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809636 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildcachedir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.809979 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: 
\"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.810107 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-blob-cache\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.810293 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildworkdir\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.810675 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-proxy-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.810861 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-gm8bb\" (UniqueName: \"kubernetes.io/projected/45221e50-21d3-404c-9fd1-ee75abc5ff2f-kube-api-access-gm8bb\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.811863 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.811974 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.812516 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.813219 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-run\") pod 
\"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.812815 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-ca-bundles\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.813012 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-root\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.813671 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-run\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.816442 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.828883 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-push\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.840273 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-gm8bb\" (UniqueName: \"kubernetes.io/projected/45221e50-21d3-404c-9fd1-ee75abc5ff2f-kube-api-access-gm8bb\") pod \"service-telemetry-operator-bundle-2-build\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:03 crc kubenswrapper[5117]: I0123 09:22:03.932960 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:04 crc kubenswrapper[5117]: I0123 09:22:04.137773 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-bundle-2-build"] Jan 23 09:22:04 crc kubenswrapper[5117]: I0123 09:22:04.541327 5117 generic.go:358] "Generic (PLEG): container finished" podID="4f0c8992-ec3a-49be-9dfd-2801a6083d7a" containerID="bbe2a9fb45a14a8d8d1ee7a74daef83e236d3110ce459b6d6de8a0fe4514f6c1" exitCode=0 Jan 23 09:22:04 crc kubenswrapper[5117]: I0123 09:22:04.541385 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" event={"ID":"4f0c8992-ec3a-49be-9dfd-2801a6083d7a","Type":"ContainerDied","Data":"bbe2a9fb45a14a8d8d1ee7a74daef83e236d3110ce459b6d6de8a0fe4514f6c1"} Jan 23 09:22:04 crc kubenswrapper[5117]: I0123 09:22:04.543381 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerStarted","Data":"3ec4b867db7ec25e3338c2a1956667df7d777f7d3065dd6aaba8d20d198ca37b"} Jan 23 09:22:04 crc kubenswrapper[5117]: I0123 09:22:04.543423 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerStarted","Data":"a62b89eced4ca14222f26065f8f90da10ff949311f6db53a9701a92d3ea2bdf3"} Jan 23 09:22:05 crc kubenswrapper[5117]: I0123 09:22:05.552519 5117 generic.go:358] "Generic (PLEG): container finished" podID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerID="3ec4b867db7ec25e3338c2a1956667df7d777f7d3065dd6aaba8d20d198ca37b" exitCode=0 Jan 23 09:22:05 crc kubenswrapper[5117]: I0123 09:22:05.552568 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerDied","Data":"3ec4b867db7ec25e3338c2a1956667df7d777f7d3065dd6aaba8d20d198ca37b"} Jan 23 09:22:05 crc kubenswrapper[5117]: I0123 09:22:05.823704 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:05 crc kubenswrapper[5117]: I0123 09:22:05.945205 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-n2whh\" (UniqueName: \"kubernetes.io/projected/4f0c8992-ec3a-49be-9dfd-2801a6083d7a-kube-api-access-n2whh\") pod \"4f0c8992-ec3a-49be-9dfd-2801a6083d7a\" (UID: \"4f0c8992-ec3a-49be-9dfd-2801a6083d7a\") " Jan 23 09:22:05 crc kubenswrapper[5117]: I0123 09:22:05.964309 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/4f0c8992-ec3a-49be-9dfd-2801a6083d7a-kube-api-access-n2whh" (OuterVolumeSpecName: "kube-api-access-n2whh") pod "4f0c8992-ec3a-49be-9dfd-2801a6083d7a" (UID: "4f0c8992-ec3a-49be-9dfd-2801a6083d7a"). InnerVolumeSpecName "kube-api-access-n2whh". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.046598 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-n2whh\" (UniqueName: \"kubernetes.io/projected/4f0c8992-ec3a-49be-9dfd-2801a6083d7a-kube-api-access-n2whh\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.560307 5117 generic.go:358] "Generic (PLEG): container finished" podID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerID="d8c1fc13fbee78a66ced4e30fac3b88f263d72e50244fd129c7891b27cccf563" exitCode=0 Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.560446 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerDied","Data":"d8c1fc13fbee78a66ced4e30fac3b88f263d72e50244fd129c7891b27cccf563"} Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.562993 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.562990 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486002-ddpmt" event={"ID":"4f0c8992-ec3a-49be-9dfd-2801a6083d7a","Type":"ContainerDied","Data":"ad54faa2b833b6c00bcc522ce3773e3439ea47a3a8fc7e20fe8b5e59778dd401"} Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.563179 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="ad54faa2b833b6c00bcc522ce3773e3439ea47a3a8fc7e20fe8b5e59778dd401" Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.605863 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-bundle-2-build_45221e50-21d3-404c-9fd1-ee75abc5ff2f/manage-dockerfile/0.log" Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.615503 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485996-f6sm8"] Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.620330 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485996-f6sm8"] Jan 23 09:22:06 crc kubenswrapper[5117]: I0123 09:22:06.777917 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="2712b004-702a-4d2c-b59f-6c7aee429a80" path="/var/lib/kubelet/pods/2712b004-702a-4d2c-b59f-6c7aee429a80/volumes" Jan 23 09:22:07 crc kubenswrapper[5117]: I0123 09:22:07.576409 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerStarted","Data":"b54e88d706a23c9ed00916a8b8d00b712227e5d9f82947cb088e396692851187"} Jan 23 09:22:07 crc kubenswrapper[5117]: I0123 09:22:07.619253 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-bundle-2-build" podStartSLOduration=4.619230283 podStartE2EDuration="4.619230283s" podCreationTimestamp="2026-01-23 09:22:03 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:22:07.612255903 +0000 UTC m=+1739.368380949" watchObservedRunningTime="2026-01-23 09:22:07.619230283 +0000 UTC m=+1739.375355319" Jan 23 09:22:10 crc kubenswrapper[5117]: I0123 09:22:10.596396 5117 generic.go:358] "Generic (PLEG): container 
finished" podID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerID="b54e88d706a23c9ed00916a8b8d00b712227e5d9f82947cb088e396692851187" exitCode=0 Jan 23 09:22:10 crc kubenswrapper[5117]: I0123 09:22:10.596528 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerDied","Data":"b54e88d706a23c9ed00916a8b8d00b712227e5d9f82947cb088e396692851187"} Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.830847 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.925769 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-root\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.925838 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-ca-bundles\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.925893 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-pull\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.925953 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildworkdir\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.925981 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-node-pullsecrets\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926042 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-push\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926186 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-system-configs\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926253 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-gm8bb\" (UniqueName: \"kubernetes.io/projected/45221e50-21d3-404c-9fd1-ee75abc5ff2f-kube-api-access-gm8bb\") pod 
\"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926278 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926317 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-run\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926591 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-blob-cache\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926664 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildcachedir\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926714 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-proxy-ca-bundles\") pod \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\" (UID: \"45221e50-21d3-404c-9fd1-ee75abc5ff2f\") " Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.926820 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.927258 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.927604 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.927398 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "build-proxy-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.927394 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.927724 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.928393 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.929642 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.930245 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.934278 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.934310 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.934458 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/45221e50-21d3-404c-9fd1-ee75abc5ff2f-kube-api-access-gm8bb" (OuterVolumeSpecName: "kube-api-access-gm8bb") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "kube-api-access-gm8bb". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:22:11 crc kubenswrapper[5117]: I0123 09:22:11.938811 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "45221e50-21d3-404c-9fd1-ee75abc5ff2f" (UID: "45221e50-21d3-404c-9fd1-ee75abc5ff2f"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029033 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029073 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029092 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-gm8bb\" (UniqueName: \"kubernetes.io/projected/45221e50-21d3-404c-9fd1-ee75abc5ff2f-kube-api-access-gm8bb\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029100 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029109 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029116 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029124 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029194 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/45221e50-21d3-404c-9fd1-ee75abc5ff2f-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.029210 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/45221e50-21d3-404c-9fd1-ee75abc5ff2f-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc 
kubenswrapper[5117]: I0123 09:22:12.029219 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/45221e50-21d3-404c-9fd1-ee75abc5ff2f-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.612273 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-operator-bundle-2-build" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.612285 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-bundle-2-build" event={"ID":"45221e50-21d3-404c-9fd1-ee75abc5ff2f","Type":"ContainerDied","Data":"a62b89eced4ca14222f26065f8f90da10ff949311f6db53a9701a92d3ea2bdf3"} Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.612332 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="a62b89eced4ca14222f26065f8f90da10ff949311f6db53a9701a92d3ea2bdf3" Jan 23 09:22:12 crc kubenswrapper[5117]: I0123 09:22:12.770963 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:22:12 crc kubenswrapper[5117]: E0123 09:22:12.771318 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.954802 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956249 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="manage-dockerfile" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956276 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="manage-dockerfile" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956306 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="git-clone" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956315 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="git-clone" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956336 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="4f0c8992-ec3a-49be-9dfd-2801a6083d7a" containerName="oc" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956346 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="4f0c8992-ec3a-49be-9dfd-2801a6083d7a" containerName="oc" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956362 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="docker-build" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956371 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="docker-build" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956547 5117 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="4f0c8992-ec3a-49be-9dfd-2801a6083d7a" containerName="oc" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.956564 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="45221e50-21d3-404c-9fd1-ee75abc5ff2f" containerName="docker-build" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.968925 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.971377 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-sys-config\"" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.971361 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.971848 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-global-ca\"" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.972000 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-1-ca\"" Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.974561 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 23 09:22:15 crc kubenswrapper[5117]: I0123 09:22:15.980671 5117 scope.go:117] "RemoveContainer" containerID="a2372b059e44c2ce7e5d009c0c37aa834d2f759ce27c17032429911837fdae5b" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098431 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098490 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098542 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098581 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098601 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098656 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098680 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098717 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098736 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-82h9q\" (UniqueName: \"kubernetes.io/projected/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-kube-api-access-82h9q\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098755 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098814 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.098833 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.200492 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.200630 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.200735 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-node-pullsecrets\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201319 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-blob-cache\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201467 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201545 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201573 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201600 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-82h9q\" (UniqueName: \"kubernetes.io/projected/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-kube-api-access-82h9q\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201624 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201701 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.201866 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.202039 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildworkdir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.202648 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.203509 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-run\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.203669 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.203739 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.203815 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " 
pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.204206 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildcachedir\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.204368 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-system-configs\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.205750 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-root\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.206282 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.207407 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.207755 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.227848 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-82h9q\" (UniqueName: \"kubernetes.io/projected/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-kube-api-access-82h9q\") pod \"smart-gateway-operator-bundle-1-build\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.291054 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.492242 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 23 09:22:16 crc kubenswrapper[5117]: I0123 09:22:16.644459 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"092d7ec1-5129-4d56-8f76-9bbc321bf5c9","Type":"ContainerStarted","Data":"d0119312c223b6392ca98ce690b28b1d4efc03c15021e972b3dddb6ad48e634c"} Jan 23 09:22:17 crc kubenswrapper[5117]: I0123 09:22:17.661547 5117 generic.go:358] "Generic (PLEG): container finished" podID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerID="c547d24974a079a9f3f2620f90b24be6b18da5c995a5b82569ed04efcea2d3d9" exitCode=0 Jan 23 09:22:17 crc kubenswrapper[5117]: I0123 09:22:17.661645 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"092d7ec1-5129-4d56-8f76-9bbc321bf5c9","Type":"ContainerDied","Data":"c547d24974a079a9f3f2620f90b24be6b18da5c995a5b82569ed04efcea2d3d9"} Jan 23 09:22:18 crc kubenswrapper[5117]: I0123 09:22:18.675592 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_092d7ec1-5129-4d56-8f76-9bbc321bf5c9/docker-build/0.log" Jan 23 09:22:18 crc kubenswrapper[5117]: I0123 09:22:18.676371 5117 generic.go:358] "Generic (PLEG): container finished" podID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerID="5a6b35628be6a3de9fb997aad564ec208577fa5d9e141fbbb54587149a2cf183" exitCode=1 Jan 23 09:22:18 crc kubenswrapper[5117]: I0123 09:22:18.676420 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"092d7ec1-5129-4d56-8f76-9bbc321bf5c9","Type":"ContainerDied","Data":"5a6b35628be6a3de9fb997aad564ec208577fa5d9e141fbbb54587149a2cf183"} Jan 23 09:22:19 crc kubenswrapper[5117]: I0123 09:22:19.905064 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_092d7ec1-5129-4d56-8f76-9bbc321bf5c9/docker-build/0.log" Jan 23 09:22:19 crc kubenswrapper[5117]: I0123 09:22:19.905882 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056506 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-blob-cache\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056584 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-root\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056649 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildcachedir\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056721 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-run\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056795 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-system-configs\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056832 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-82h9q\" (UniqueName: \"kubernetes.io/projected/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-kube-api-access-82h9q\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056861 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-push\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056927 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-node-pullsecrets\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056949 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-pull\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.056971 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" 
(UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildworkdir\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.057002 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-proxy-ca-bundles\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.057086 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-ca-bundles\") pod \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\" (UID: \"092d7ec1-5129-4d56-8f76-9bbc321bf5c9\") " Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.057966 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.059122 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.059211 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.059287 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.059405 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.059911 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). 
InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.060000 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.060425 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.061037 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.064272 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.064530 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.064735 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-kube-api-access-82h9q" (OuterVolumeSpecName: "kube-api-access-82h9q") pod "092d7ec1-5129-4d56-8f76-9bbc321bf5c9" (UID: "092d7ec1-5129-4d56-8f76-9bbc321bf5c9"). InnerVolumeSpecName "kube-api-access-82h9q". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.159070 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.159783 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.159868 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.159943 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-82h9q\" (UniqueName: \"kubernetes.io/projected/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-kube-api-access-82h9q\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160016 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160091 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160197 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160278 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160351 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160428 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160508 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.160582 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/092d7ec1-5129-4d56-8f76-9bbc321bf5c9-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.695417 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_092d7ec1-5129-4d56-8f76-9bbc321bf5c9/docker-build/0.log" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.696303 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-1-build" event={"ID":"092d7ec1-5129-4d56-8f76-9bbc321bf5c9","Type":"ContainerDied","Data":"d0119312c223b6392ca98ce690b28b1d4efc03c15021e972b3dddb6ad48e634c"} Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.696350 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d0119312c223b6392ca98ce690b28b1d4efc03c15021e972b3dddb6ad48e634c" Jan 23 09:22:20 crc kubenswrapper[5117]: I0123 09:22:20.696372 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-1-build" Jan 23 09:22:25 crc kubenswrapper[5117]: I0123 09:22:25.770574 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:22:25 crc kubenswrapper[5117]: E0123 09:22:25.770872 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:22:26 crc kubenswrapper[5117]: I0123 09:22:26.466619 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 23 09:22:26 crc kubenswrapper[5117]: I0123 09:22:26.471349 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-1-build"] Jan 23 09:22:26 crc kubenswrapper[5117]: I0123 09:22:26.779710 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" path="/var/lib/kubelet/pods/092d7ec1-5129-4d56-8f76-9bbc321bf5c9/volumes" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.168439 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.169486 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerName="manage-dockerfile" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.169505 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerName="manage-dockerfile" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.169543 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerName="docker-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.169551 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerName="docker-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.169671 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="092d7ec1-5129-4d56-8f76-9bbc321bf5c9" containerName="docker-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.180032 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.183494 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-sys-config\"" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.184268 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-global-ca\"" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.184909 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-bundle-2-ca\"" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.186797 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.196058 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.269954 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270024 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270069 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270098 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270158 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270188 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nl98j\" (UniqueName: 
\"kubernetes.io/projected/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-kube-api-access-nl98j\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270314 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270345 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270412 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270466 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270493 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.270556 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374021 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374152 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" 
(UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374187 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374266 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374323 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374363 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374409 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374442 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374494 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374527 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nl98j\" (UniqueName: \"kubernetes.io/projected/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-kube-api-access-nl98j\") pod \"smart-gateway-operator-bundle-2-build\" (UID: 
\"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374587 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374619 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374721 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-root\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374729 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildcachedir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374867 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-node-pullsecrets\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.374986 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-blob-cache\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.375049 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-proxy-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.375168 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildworkdir\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.375179 5117 operation_generator.go:615] "MountVolume.SetUp succeeded 
for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-run\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.375517 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-system-configs\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.375855 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-ca-bundles\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.379481 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-push\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.379975 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-pull\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.390377 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nl98j\" (UniqueName: \"kubernetes.io/projected/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-kube-api-access-nl98j\") pod \"smart-gateway-operator-bundle-2-build\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.494192 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.737917 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-bundle-2-build"] Jan 23 09:22:28 crc kubenswrapper[5117]: I0123 09:22:28.755379 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerStarted","Data":"57f76050fe7fc5e2958333644b2f86280cb784f61d436b1d1e7a95ae660f4c34"} Jan 23 09:22:29 crc kubenswrapper[5117]: I0123 09:22:29.763991 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerStarted","Data":"02e17f60539be0fd33c7b0d5af01255d1f2a57fa3ab5787f8feada2c384372b9"} Jan 23 09:22:30 crc kubenswrapper[5117]: I0123 09:22:30.773838 5117 generic.go:358] "Generic (PLEG): container finished" podID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerID="02e17f60539be0fd33c7b0d5af01255d1f2a57fa3ab5787f8feada2c384372b9" exitCode=0 Jan 23 09:22:30 crc kubenswrapper[5117]: I0123 09:22:30.789552 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerDied","Data":"02e17f60539be0fd33c7b0d5af01255d1f2a57fa3ab5787f8feada2c384372b9"} Jan 23 09:22:31 crc kubenswrapper[5117]: I0123 09:22:31.781750 5117 generic.go:358] "Generic (PLEG): container finished" podID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerID="f716672a50a69393af94d5c194cd893755da83fda49c77cdcecca441c412f24b" exitCode=0 Jan 23 09:22:31 crc kubenswrapper[5117]: I0123 09:22:31.781794 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerDied","Data":"f716672a50a69393af94d5c194cd893755da83fda49c77cdcecca441c412f24b"} Jan 23 09:22:31 crc kubenswrapper[5117]: I0123 09:22:31.818710 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-2-build_6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2/manage-dockerfile/0.log" Jan 23 09:22:32 crc kubenswrapper[5117]: I0123 09:22:32.789620 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerStarted","Data":"fc4bd91a2cc5901539598bf6cd78ab0c53a3faa4963345e0fa3938f085f1877a"} Jan 23 09:22:32 crc kubenswrapper[5117]: I0123 09:22:32.818062 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-bundle-2-build" podStartSLOduration=4.818039081 podStartE2EDuration="4.818039081s" podCreationTimestamp="2026-01-23 09:22:28 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:22:32.814876541 +0000 UTC m=+1764.571001577" watchObservedRunningTime="2026-01-23 09:22:32.818039081 +0000 UTC m=+1764.574164107" Jan 23 09:22:37 crc kubenswrapper[5117]: I0123 09:22:37.827567 5117 generic.go:358] "Generic (PLEG): container finished" podID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerID="fc4bd91a2cc5901539598bf6cd78ab0c53a3faa4963345e0fa3938f085f1877a" exitCode=0 Jan 23 09:22:37 crc 
kubenswrapper[5117]: I0123 09:22:37.827651 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerDied","Data":"fc4bd91a2cc5901539598bf6cd78ab0c53a3faa4963345e0fa3938f085f1877a"} Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.124171 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.161697 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-blob-cache\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.161773 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-pull\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.161850 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildcachedir\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.161893 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-ca-bundles\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.161926 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-root\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.161998 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-proxy-ca-bundles\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.162063 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildworkdir\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.162094 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-node-pullsecrets\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.162116 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for 
volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-run\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.162160 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-system-configs\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.162211 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nl98j\" (UniqueName: \"kubernetes.io/projected/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-kube-api-access-nl98j\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.162235 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-push\") pod \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\" (UID: \"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2\") " Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.163606 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.164531 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.165048 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "node-pullsecrets". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.165058 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.165691 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "build-ca-bundles". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.165840 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.165985 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.166781 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.172329 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-kube-api-access-nl98j" (OuterVolumeSpecName: "kube-api-access-nl98j") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "kube-api-access-nl98j". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.172808 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.173993 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.177604 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" (UID: "6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263718 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263768 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263782 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263796 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263812 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263822 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263837 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263849 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nl98j\" (UniqueName: \"kubernetes.io/projected/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-kube-api-access-nl98j\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263862 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263873 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263885 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.263896 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.770935 5117 scope.go:117] "RemoveContainer" 
containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:22:39 crc kubenswrapper[5117]: E0123 09:22:39.771366 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.847899 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-bundle-2-build" event={"ID":"6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2","Type":"ContainerDied","Data":"57f76050fe7fc5e2958333644b2f86280cb784f61d436b1d1e7a95ae660f4c34"} Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.847946 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="57f76050fe7fc5e2958333644b2f86280cb784f61d436b1d1e7a95ae660f4c34" Jan 23 09:22:39 crc kubenswrapper[5117]: I0123 09:22:39.848031 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-bundle-2-build" Jan 23 09:22:53 crc kubenswrapper[5117]: I0123 09:22:53.771423 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:22:53 crc kubenswrapper[5117]: E0123 09:22:53.772238 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.456991 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458156 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="git-clone" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458174 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="git-clone" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458192 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="manage-dockerfile" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458199 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="manage-dockerfile" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458239 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="docker-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458247 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="docker-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.458374 5117 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="6b64c5b3-00e9-45bb-ad4a-7bc2877eeaf2" containerName="docker-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.483851 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.484000 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.487732 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-dockercfg\"" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.487840 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-ca\"" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.487962 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-sys-config\"" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.488102 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"service-telemetry-framework-index-1-global-ca\"" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.488165 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"builder-dockercfg-48kpv\"" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626252 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626352 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626470 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626518 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626593 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: 
\"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626637 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626770 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626851 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626898 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.626936 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.627033 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.627119 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: 
I0123 09:22:57.627238 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6mtkf\" (UniqueName: \"kubernetes.io/projected/6157cd66-fffe-4d3c-96b5-8acb1c257edc-kube-api-access-6mtkf\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.728920 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729013 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729053 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729106 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729530 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729568 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6mtkf\" (UniqueName: \"kubernetes.io/projected/6157cd66-fffe-4d3c-96b5-8acb1c257edc-kube-api-access-6mtkf\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729613 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729647 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729815 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-node-pullsecrets\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729878 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729910 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.729961 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730002 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildworkdir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730053 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildcachedir\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730221 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730228 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-run\" (UniqueName: 
\"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-run\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730401 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-blob-cache\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730491 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.730870 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-root\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.731181 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-system-configs\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.731268 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-proxy-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.732078 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-ca-bundles\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.737674 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-push\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.739016 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: 
\"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-pull\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.739278 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.752404 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6mtkf\" (UniqueName: \"kubernetes.io/projected/6157cd66-fffe-4d3c-96b5-8acb1c257edc-kube-api-access-6mtkf\") pod \"service-telemetry-framework-index-1-build\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:57 crc kubenswrapper[5117]: I0123 09:22:57.800144 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:22:58 crc kubenswrapper[5117]: I0123 09:22:58.029798 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-framework-index-1-build"] Jan 23 09:22:59 crc kubenswrapper[5117]: I0123 09:22:59.009753 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerStarted","Data":"2f10a14fd59f341c6d7916e4504d7697e284fb8cb6ad66bb0565bbaf542301c6"} Jan 23 09:22:59 crc kubenswrapper[5117]: I0123 09:22:59.010228 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerStarted","Data":"34a72f68623c98a0b0c48f3231e951ceb936d584e9e6b319500ea7919eb9ac8b"} Jan 23 09:23:00 crc kubenswrapper[5117]: I0123 09:23:00.020979 5117 generic.go:358] "Generic (PLEG): container finished" podID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerID="2f10a14fd59f341c6d7916e4504d7697e284fb8cb6ad66bb0565bbaf542301c6" exitCode=0 Jan 23 09:23:00 crc kubenswrapper[5117]: I0123 09:23:00.021049 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerDied","Data":"2f10a14fd59f341c6d7916e4504d7697e284fb8cb6ad66bb0565bbaf542301c6"} Jan 23 09:23:01 crc kubenswrapper[5117]: I0123 09:23:01.028850 5117 generic.go:358] "Generic (PLEG): container finished" podID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerID="88beb8bf8063ef10e4c1a61313f573100f8f5b23ac3aa60b16ae757946c2c8b9" exitCode=0 Jan 23 09:23:01 crc kubenswrapper[5117]: I0123 09:23:01.028935 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerDied","Data":"88beb8bf8063ef10e4c1a61313f573100f8f5b23ac3aa60b16ae757946c2c8b9"} Jan 23 09:23:01 crc kubenswrapper[5117]: I0123 09:23:01.057542 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_service-telemetry-framework-index-1-build_6157cd66-fffe-4d3c-96b5-8acb1c257edc/manage-dockerfile/0.log" Jan 23 09:23:02 crc kubenswrapper[5117]: I0123 09:23:02.043594 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerStarted","Data":"7c45264ebb53904d4214b68a1a8f70465e7987517f8bb023ba0eadbef87c1a52"} Jan 23 09:23:02 crc kubenswrapper[5117]: I0123 09:23:02.071597 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-framework-index-1-build" podStartSLOduration=5.071570678 podStartE2EDuration="5.071570678s" podCreationTimestamp="2026-01-23 09:22:57 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:23:02.071173156 +0000 UTC m=+1793.827298192" watchObservedRunningTime="2026-01-23 09:23:02.071570678 +0000 UTC m=+1793.827695714" Jan 23 09:23:08 crc kubenswrapper[5117]: I0123 09:23:08.776903 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:23:08 crc kubenswrapper[5117]: E0123 09:23:08.777618 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:23:15 crc kubenswrapper[5117]: I0123 09:23:15.499423 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_092d7ec1-5129-4d56-8f76-9bbc321bf5c9/docker-build/0.log" Jan 23 09:23:15 crc kubenswrapper[5117]: I0123 09:23:15.502602 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-bundle-1-build_092d7ec1-5129-4d56-8f76-9bbc321bf5c9/docker-build/0.log" Jan 23 09:23:15 crc kubenswrapper[5117]: I0123 09:23:15.562632 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:23:15 crc kubenswrapper[5117]: I0123 09:23:15.563580 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:23:15 crc kubenswrapper[5117]: I0123 09:23:15.571431 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:23:15 crc kubenswrapper[5117]: I0123 09:23:15.571737 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:23:22 crc kubenswrapper[5117]: I0123 09:23:22.770774 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:23:22 crc kubenswrapper[5117]: E0123 09:23:22.771706 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s 
restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:23:34 crc kubenswrapper[5117]: I0123 09:23:34.770713 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:23:34 crc kubenswrapper[5117]: E0123 09:23:34.771774 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:23:41 crc kubenswrapper[5117]: I0123 09:23:41.330332 5117 generic.go:358] "Generic (PLEG): container finished" podID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerID="7c45264ebb53904d4214b68a1a8f70465e7987517f8bb023ba0eadbef87c1a52" exitCode=0 Jan 23 09:23:41 crc kubenswrapper[5117]: I0123 09:23:41.330696 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerDied","Data":"7c45264ebb53904d4214b68a1a8f70465e7987517f8bb023ba0eadbef87c1a52"} Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.609645 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758414 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6mtkf\" (UniqueName: \"kubernetes.io/projected/6157cd66-fffe-4d3c-96b5-8acb1c257edc-kube-api-access-6mtkf\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758527 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-blob-cache\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758670 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-system-configs\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758705 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildworkdir\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758757 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-run\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " 
Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758871 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-root\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.758921 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-node-pullsecrets\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759001 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-service-telemetry-framework-index-dockercfg-user-build-volume\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759048 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-push\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759080 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-proxy-ca-bundles\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759159 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-pull\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759152 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-node-pullsecrets" (OuterVolumeSpecName: "node-pullsecrets") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "node-pullsecrets". 
PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759214 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-ca-bundles\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.759654 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildcachedir\") pod \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\" (UID: \"6157cd66-fffe-4d3c-96b5-8acb1c257edc\") " Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760003 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildworkdir" (OuterVolumeSpecName: "buildworkdir") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "buildworkdir". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760053 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildcachedir" (OuterVolumeSpecName: "buildcachedir") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "buildcachedir". PluginName "kubernetes.io/host-path", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760247 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-run" (OuterVolumeSpecName: "container-storage-run") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "container-storage-run". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760643 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-run\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-run\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760667 5117 reconciler_common.go:299] "Volume detached for volume \"node-pullsecrets\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-node-pullsecrets\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760678 5117 reconciler_common.go:299] "Volume detached for volume \"buildcachedir\" (UniqueName: \"kubernetes.io/host-path/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildcachedir\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760690 5117 reconciler_common.go:299] "Volume detached for volume \"buildworkdir\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-buildworkdir\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760863 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-system-configs" (OuterVolumeSpecName: "build-system-configs") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). 
InnerVolumeSpecName "build-system-configs". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.760983 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-ca-bundles" (OuterVolumeSpecName: "build-ca-bundles") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "build-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.762349 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-proxy-ca-bundles" (OuterVolumeSpecName: "build-proxy-ca-bundles") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "build-proxy-ca-bundles". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.768387 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-push" (OuterVolumeSpecName: "builder-dockercfg-48kpv-push") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "builder-dockercfg-48kpv-push". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.770641 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-pull" (OuterVolumeSpecName: "builder-dockercfg-48kpv-pull") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "builder-dockercfg-48kpv-pull". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.776034 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/6157cd66-fffe-4d3c-96b5-8acb1c257edc-kube-api-access-6mtkf" (OuterVolumeSpecName: "kube-api-access-6mtkf") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "kube-api-access-6mtkf". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.778399 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-service-telemetry-framework-index-dockercfg-user-build-volume" (OuterVolumeSpecName: "service-telemetry-framework-index-dockercfg-user-build-volume") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "service-telemetry-framework-index-dockercfg-user-build-volume". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863264 5117 reconciler_common.go:299] "Volume detached for volume \"build-system-configs\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-system-configs\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863310 5117 reconciler_common.go:299] "Volume detached for volume \"service-telemetry-framework-index-dockercfg-user-build-volume\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-service-telemetry-framework-index-dockercfg-user-build-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863323 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-push\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-push\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863334 5117 reconciler_common.go:299] "Volume detached for volume \"build-proxy-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-proxy-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863346 5117 reconciler_common.go:299] "Volume detached for volume \"builder-dockercfg-48kpv-pull\" (UniqueName: \"kubernetes.io/secret/6157cd66-fffe-4d3c-96b5-8acb1c257edc-builder-dockercfg-48kpv-pull\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863356 5117 reconciler_common.go:299] "Volume detached for volume \"build-ca-bundles\" (UniqueName: \"kubernetes.io/configmap/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-ca-bundles\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.863437 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6mtkf\" (UniqueName: \"kubernetes.io/projected/6157cd66-fffe-4d3c-96b5-8acb1c257edc-kube-api-access-6mtkf\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:42 crc kubenswrapper[5117]: I0123 09:23:42.992386 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-blob-cache" (OuterVolumeSpecName: "build-blob-cache") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "build-blob-cache". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:23:43 crc kubenswrapper[5117]: I0123 09:23:43.066191 5117 reconciler_common.go:299] "Volume detached for volume \"build-blob-cache\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-build-blob-cache\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:43 crc kubenswrapper[5117]: I0123 09:23:43.349210 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-framework-index-1-build" Jan 23 09:23:43 crc kubenswrapper[5117]: I0123 09:23:43.349223 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-framework-index-1-build" event={"ID":"6157cd66-fffe-4d3c-96b5-8acb1c257edc","Type":"ContainerDied","Data":"34a72f68623c98a0b0c48f3231e951ceb936d584e9e6b319500ea7919eb9ac8b"} Jan 23 09:23:43 crc kubenswrapper[5117]: I0123 09:23:43.349557 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="34a72f68623c98a0b0c48f3231e951ceb936d584e9e6b319500ea7919eb9ac8b" Jan 23 09:23:43 crc kubenswrapper[5117]: I0123 09:23:43.890996 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-root" (OuterVolumeSpecName: "container-storage-root") pod "6157cd66-fffe-4d3c-96b5-8acb1c257edc" (UID: "6157cd66-fffe-4d3c-96b5-8acb1c257edc"). InnerVolumeSpecName "container-storage-root". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:23:43 crc kubenswrapper[5117]: I0123 09:23:43.979484 5117 reconciler_common.go:299] "Volume detached for volume \"container-storage-root\" (UniqueName: \"kubernetes.io/empty-dir/6157cd66-fffe-4d3c-96b5-8acb1c257edc-container-storage-root\") on node \"crc\" DevicePath \"\"" Jan 23 09:23:45 crc kubenswrapper[5117]: I0123 09:23:45.771170 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:23:45 crc kubenswrapper[5117]: E0123 09:23:45.772283 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.462857 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-fnhn2"] Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463540 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="git-clone" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463560 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="git-clone" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463582 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="manage-dockerfile" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463588 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="manage-dockerfile" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463611 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="docker-build" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463618 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="docker-build" Jan 23 09:23:46 crc kubenswrapper[5117]: I0123 09:23:46.463726 5117 memory_manager.go:356] 
"RemoveStaleState removing state" podUID="6157cd66-fffe-4d3c-96b5-8acb1c257edc" containerName="docker-build" Jan 23 09:23:47 crc kubenswrapper[5117]: I0123 09:23:47.695338 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fnhn2"] Jan 23 09:23:47 crc kubenswrapper[5117]: I0123 09:23:47.695562 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:23:47 crc kubenswrapper[5117]: I0123 09:23:47.698384 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"infrawatch-operators-dockercfg-pswz5\"" Jan 23 09:23:47 crc kubenswrapper[5117]: I0123 09:23:47.745878 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-tkt65\" (UniqueName: \"kubernetes.io/projected/5f6b1dc0-b506-48db-b60a-92cfbe0e6f26-kube-api-access-tkt65\") pod \"infrawatch-operators-fnhn2\" (UID: \"5f6b1dc0-b506-48db-b60a-92cfbe0e6f26\") " pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:23:47 crc kubenswrapper[5117]: I0123 09:23:47.848431 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-tkt65\" (UniqueName: \"kubernetes.io/projected/5f6b1dc0-b506-48db-b60a-92cfbe0e6f26-kube-api-access-tkt65\") pod \"infrawatch-operators-fnhn2\" (UID: \"5f6b1dc0-b506-48db-b60a-92cfbe0e6f26\") " pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:23:47 crc kubenswrapper[5117]: I0123 09:23:47.869941 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-tkt65\" (UniqueName: \"kubernetes.io/projected/5f6b1dc0-b506-48db-b60a-92cfbe0e6f26-kube-api-access-tkt65\") pod \"infrawatch-operators-fnhn2\" (UID: \"5f6b1dc0-b506-48db-b60a-92cfbe0e6f26\") " pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:23:48 crc kubenswrapper[5117]: I0123 09:23:48.017932 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:23:48 crc kubenswrapper[5117]: I0123 09:23:48.462526 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-fnhn2"] Jan 23 09:23:49 crc kubenswrapper[5117]: I0123 09:23:49.394244 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-fnhn2" event={"ID":"5f6b1dc0-b506-48db-b60a-92cfbe0e6f26","Type":"ContainerStarted","Data":"2ea86779c0775bc284eddc985dfc24d2771f925da238be8cb2e452561d077cb6"} Jan 23 09:23:57 crc kubenswrapper[5117]: I0123 09:23:57.770893 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:23:57 crc kubenswrapper[5117]: E0123 09:23:57.771107 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:23:59 crc kubenswrapper[5117]: I0123 09:23:59.464923 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-fnhn2" event={"ID":"5f6b1dc0-b506-48db-b60a-92cfbe0e6f26","Type":"ContainerStarted","Data":"319d99928f76a33420e109af79c28f9d43801c87e8371eec045c170cde30e353"} Jan 23 09:23:59 crc kubenswrapper[5117]: I0123 09:23:59.485918 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-fnhn2" podStartSLOduration=2.960218748 podStartE2EDuration="13.485890086s" podCreationTimestamp="2026-01-23 09:23:46 +0000 UTC" firstStartedPulling="2026-01-23 09:23:48.472522422 +0000 UTC m=+1840.228647458" lastFinishedPulling="2026-01-23 09:23:58.99819376 +0000 UTC m=+1850.754318796" observedRunningTime="2026-01-23 09:23:59.47685254 +0000 UTC m=+1851.232977586" watchObservedRunningTime="2026-01-23 09:23:59.485890086 +0000 UTC m=+1851.242015132" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.149372 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486004-zx4tx"] Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.153824 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.156495 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.157459 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.157471 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.161353 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486004-zx4tx"] Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.245668 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-j672k\" (UniqueName: \"kubernetes.io/projected/d8028d1d-f059-45e1-bf54-3ce033e18492-kube-api-access-j672k\") pod \"auto-csr-approver-29486004-zx4tx\" (UID: \"d8028d1d-f059-45e1-bf54-3ce033e18492\") " pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.346676 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-j672k\" (UniqueName: \"kubernetes.io/projected/d8028d1d-f059-45e1-bf54-3ce033e18492-kube-api-access-j672k\") pod \"auto-csr-approver-29486004-zx4tx\" (UID: \"d8028d1d-f059-45e1-bf54-3ce033e18492\") " pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.371783 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-j672k\" (UniqueName: \"kubernetes.io/projected/d8028d1d-f059-45e1-bf54-3ce033e18492-kube-api-access-j672k\") pod \"auto-csr-approver-29486004-zx4tx\" (UID: \"d8028d1d-f059-45e1-bf54-3ce033e18492\") " pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.470926 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:00 crc kubenswrapper[5117]: I0123 09:24:00.664459 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486004-zx4tx"] Jan 23 09:24:01 crc kubenswrapper[5117]: I0123 09:24:01.479764 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" event={"ID":"d8028d1d-f059-45e1-bf54-3ce033e18492","Type":"ContainerStarted","Data":"0cbaaf52a0f00bae046b5582e1277bd13f85b6f9bf8d75acfa6d6e2211aa6cc8"} Jan 23 09:24:02 crc kubenswrapper[5117]: I0123 09:24:02.489092 5117 generic.go:358] "Generic (PLEG): container finished" podID="d8028d1d-f059-45e1-bf54-3ce033e18492" containerID="537c1d95d21e174b5d36e2f9e85af6bae72eb5c95b7fa606fd1d8e0581206aab" exitCode=0 Jan 23 09:24:02 crc kubenswrapper[5117]: I0123 09:24:02.489197 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" event={"ID":"d8028d1d-f059-45e1-bf54-3ce033e18492","Type":"ContainerDied","Data":"537c1d95d21e174b5d36e2f9e85af6bae72eb5c95b7fa606fd1d8e0581206aab"} Jan 23 09:24:03 crc kubenswrapper[5117]: I0123 09:24:03.728539 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:03 crc kubenswrapper[5117]: I0123 09:24:03.896684 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-j672k\" (UniqueName: \"kubernetes.io/projected/d8028d1d-f059-45e1-bf54-3ce033e18492-kube-api-access-j672k\") pod \"d8028d1d-f059-45e1-bf54-3ce033e18492\" (UID: \"d8028d1d-f059-45e1-bf54-3ce033e18492\") " Jan 23 09:24:03 crc kubenswrapper[5117]: I0123 09:24:03.901916 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d8028d1d-f059-45e1-bf54-3ce033e18492-kube-api-access-j672k" (OuterVolumeSpecName: "kube-api-access-j672k") pod "d8028d1d-f059-45e1-bf54-3ce033e18492" (UID: "d8028d1d-f059-45e1-bf54-3ce033e18492"). InnerVolumeSpecName "kube-api-access-j672k". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:24:03 crc kubenswrapper[5117]: I0123 09:24:03.998420 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-j672k\" (UniqueName: \"kubernetes.io/projected/d8028d1d-f059-45e1-bf54-3ce033e18492-kube-api-access-j672k\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:04 crc kubenswrapper[5117]: I0123 09:24:04.509866 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" Jan 23 09:24:04 crc kubenswrapper[5117]: I0123 09:24:04.510356 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486004-zx4tx" event={"ID":"d8028d1d-f059-45e1-bf54-3ce033e18492","Type":"ContainerDied","Data":"0cbaaf52a0f00bae046b5582e1277bd13f85b6f9bf8d75acfa6d6e2211aa6cc8"} Jan 23 09:24:04 crc kubenswrapper[5117]: I0123 09:24:04.510420 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0cbaaf52a0f00bae046b5582e1277bd13f85b6f9bf8d75acfa6d6e2211aa6cc8" Jan 23 09:24:04 crc kubenswrapper[5117]: I0123 09:24:04.786715 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29485998-58xvp"] Jan 23 09:24:04 crc kubenswrapper[5117]: I0123 09:24:04.791867 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29485998-58xvp"] Jan 23 09:24:06 crc kubenswrapper[5117]: I0123 09:24:06.803173 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="15395d2f-4140-4df5-a478-df783b84ba78" path="/var/lib/kubelet/pods/15395d2f-4140-4df5-a478-df783b84ba78/volumes" Jan 23 09:24:08 crc kubenswrapper[5117]: I0123 09:24:08.018577 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:24:08 crc kubenswrapper[5117]: I0123 09:24:08.019121 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:24:08 crc kubenswrapper[5117]: I0123 09:24:08.055699 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:24:08 crc kubenswrapper[5117]: I0123 09:24:08.572860 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-fnhn2" Jan 23 09:24:10 crc kubenswrapper[5117]: I0123 09:24:10.771403 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:24:10 crc kubenswrapper[5117]: E0123 09:24:10.772158 5117 
pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:24:16 crc kubenswrapper[5117]: I0123 09:24:16.768810 5117 scope.go:117] "RemoveContainer" containerID="3b4f2aaf37fe3348978504ad4f99d6609007e7e0b2c95671efb15e8b72df34ba" Jan 23 09:24:25 crc kubenswrapper[5117]: I0123 09:24:25.772587 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:24:25 crc kubenswrapper[5117]: E0123 09:24:25.773862 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:24:34 crc kubenswrapper[5117]: I0123 09:24:34.072280 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh"] Jan 23 09:24:34 crc kubenswrapper[5117]: I0123 09:24:34.073906 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d8028d1d-f059-45e1-bf54-3ce033e18492" containerName="oc" Jan 23 09:24:34 crc kubenswrapper[5117]: I0123 09:24:34.073924 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d8028d1d-f059-45e1-bf54-3ce033e18492" containerName="oc" Jan 23 09:24:34 crc kubenswrapper[5117]: I0123 09:24:34.074095 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d8028d1d-f059-45e1-bf54-3ce033e18492" containerName="oc" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.236542 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.246487 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh"] Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.246525 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b"] Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.248227 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:24:40 crc kubenswrapper[5117]: E0123 09:24:40.248404 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.253649 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b"] Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.253890 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.301044 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.301480 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.302257 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.302536 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-t7cv2\" (UniqueName: \"kubernetes.io/projected/79099d51-18d0-41da-af30-85ab620ddbd3-kube-api-access-t7cv2\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc 
kubenswrapper[5117]: I0123 09:24:40.302693 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-9ssg8\" (UniqueName: \"kubernetes.io/projected/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-kube-api-access-9ssg8\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.302779 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.404526 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.404591 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-t7cv2\" (UniqueName: \"kubernetes.io/projected/79099d51-18d0-41da-af30-85ab620ddbd3-kube-api-access-t7cv2\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.404618 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-9ssg8\" (UniqueName: \"kubernetes.io/projected/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-kube-api-access-9ssg8\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.404642 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.404671 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.404714 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-util\") pod 
\"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.405474 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-util\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.405717 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-bundle\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.405730 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-util\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.406286 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-bundle\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.424027 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-t7cv2\" (UniqueName: \"kubernetes.io/projected/79099d51-18d0-41da-af30-85ab620ddbd3-kube-api-access-t7cv2\") pod \"500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.424043 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-9ssg8\" (UniqueName: \"kubernetes.io/projected/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-kube-api-access-9ssg8\") pod \"372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.558317 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.573083 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.821818 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh"] Jan 23 09:24:40 crc kubenswrapper[5117]: I0123 09:24:40.978248 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b"] Jan 23 09:24:40 crc kubenswrapper[5117]: W0123 09:24:40.991950 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f1a4ebb_a17e_41e7_b77a_db8c2669d625.slice/crio-bce729bc164c794aa9cc1cdf2c19bf7ef368aae0430000e540d40707a67fa6c4 WatchSource:0}: Error finding container bce729bc164c794aa9cc1cdf2c19bf7ef368aae0430000e540d40707a67fa6c4: Status 404 returned error can't find the container with id bce729bc164c794aa9cc1cdf2c19bf7ef368aae0430000e540d40707a67fa6c4 Jan 23 09:24:41 crc kubenswrapper[5117]: I0123 09:24:41.778534 5117 generic.go:358] "Generic (PLEG): container finished" podID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerID="50c3612cc18220db02f42e6437f2521740d4561e2d46e787d20379dda0c2b3a7" exitCode=0 Jan 23 09:24:41 crc kubenswrapper[5117]: I0123 09:24:41.778590 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" event={"ID":"3f1a4ebb-a17e-41e7-b77a-db8c2669d625","Type":"ContainerDied","Data":"50c3612cc18220db02f42e6437f2521740d4561e2d46e787d20379dda0c2b3a7"} Jan 23 09:24:41 crc kubenswrapper[5117]: I0123 09:24:41.778956 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" event={"ID":"3f1a4ebb-a17e-41e7-b77a-db8c2669d625","Type":"ContainerStarted","Data":"bce729bc164c794aa9cc1cdf2c19bf7ef368aae0430000e540d40707a67fa6c4"} Jan 23 09:24:41 crc kubenswrapper[5117]: I0123 09:24:41.785018 5117 generic.go:358] "Generic (PLEG): container finished" podID="79099d51-18d0-41da-af30-85ab620ddbd3" containerID="93a46911d365bfc3bd07cf2c0214c1bd7aac062b0aa1690a6c5c8caa88498ff4" exitCode=0 Jan 23 09:24:41 crc kubenswrapper[5117]: I0123 09:24:41.785135 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" event={"ID":"79099d51-18d0-41da-af30-85ab620ddbd3","Type":"ContainerDied","Data":"93a46911d365bfc3bd07cf2c0214c1bd7aac062b0aa1690a6c5c8caa88498ff4"} Jan 23 09:24:41 crc kubenswrapper[5117]: I0123 09:24:41.785178 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" event={"ID":"79099d51-18d0-41da-af30-85ab620ddbd3","Type":"ContainerStarted","Data":"41bcfd65b934ca3601584d12629426201a37c63e7a6b94c84e7ecf8c3b10719c"} Jan 23 09:24:42 crc kubenswrapper[5117]: I0123 09:24:42.797576 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" event={"ID":"3f1a4ebb-a17e-41e7-b77a-db8c2669d625","Type":"ContainerStarted","Data":"0c9753e7b61246793f99cc583c26c254a0472411c442ee8d74de22288ce818f8"} Jan 23 09:24:42 crc kubenswrapper[5117]: I0123 09:24:42.800508 5117 generic.go:358] "Generic (PLEG): container finished" podID="79099d51-18d0-41da-af30-85ab620ddbd3" 
containerID="1cc2ea765b0370319e1748806444d24e58a67981b8c9462884717599899216d4" exitCode=0 Jan 23 09:24:42 crc kubenswrapper[5117]: I0123 09:24:42.800594 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" event={"ID":"79099d51-18d0-41da-af30-85ab620ddbd3","Type":"ContainerDied","Data":"1cc2ea765b0370319e1748806444d24e58a67981b8c9462884717599899216d4"} Jan 23 09:24:43 crc kubenswrapper[5117]: I0123 09:24:43.810098 5117 generic.go:358] "Generic (PLEG): container finished" podID="79099d51-18d0-41da-af30-85ab620ddbd3" containerID="4ff7117754f2e83519e3ee0c85c81119dca69239da645a9d41d897ed15321cd8" exitCode=0 Jan 23 09:24:43 crc kubenswrapper[5117]: I0123 09:24:43.810183 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" event={"ID":"79099d51-18d0-41da-af30-85ab620ddbd3","Type":"ContainerDied","Data":"4ff7117754f2e83519e3ee0c85c81119dca69239da645a9d41d897ed15321cd8"} Jan 23 09:24:43 crc kubenswrapper[5117]: I0123 09:24:43.813318 5117 generic.go:358] "Generic (PLEG): container finished" podID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerID="0c9753e7b61246793f99cc583c26c254a0472411c442ee8d74de22288ce818f8" exitCode=0 Jan 23 09:24:43 crc kubenswrapper[5117]: I0123 09:24:43.813402 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" event={"ID":"3f1a4ebb-a17e-41e7-b77a-db8c2669d625","Type":"ContainerDied","Data":"0c9753e7b61246793f99cc583c26c254a0472411c442ee8d74de22288ce818f8"} Jan 23 09:24:44 crc kubenswrapper[5117]: I0123 09:24:44.822904 5117 generic.go:358] "Generic (PLEG): container finished" podID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerID="a27ba2830bab49e38719ac54f0a0cf0a41a00aab2d28d8b833877951eb77d2ef" exitCode=0 Jan 23 09:24:44 crc kubenswrapper[5117]: I0123 09:24:44.822975 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" event={"ID":"3f1a4ebb-a17e-41e7-b77a-db8c2669d625","Type":"ContainerDied","Data":"a27ba2830bab49e38719ac54f0a0cf0a41a00aab2d28d8b833877951eb77d2ef"} Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.029867 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.178484 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-bundle\") pod \"79099d51-18d0-41da-af30-85ab620ddbd3\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.178549 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-util\") pod \"79099d51-18d0-41da-af30-85ab620ddbd3\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.178704 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-t7cv2\" (UniqueName: \"kubernetes.io/projected/79099d51-18d0-41da-af30-85ab620ddbd3-kube-api-access-t7cv2\") pod \"79099d51-18d0-41da-af30-85ab620ddbd3\" (UID: \"79099d51-18d0-41da-af30-85ab620ddbd3\") " Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.179052 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-bundle" (OuterVolumeSpecName: "bundle") pod "79099d51-18d0-41da-af30-85ab620ddbd3" (UID: "79099d51-18d0-41da-af30-85ab620ddbd3"). InnerVolumeSpecName "bundle". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.187600 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/79099d51-18d0-41da-af30-85ab620ddbd3-kube-api-access-t7cv2" (OuterVolumeSpecName: "kube-api-access-t7cv2") pod "79099d51-18d0-41da-af30-85ab620ddbd3" (UID: "79099d51-18d0-41da-af30-85ab620ddbd3"). InnerVolumeSpecName "kube-api-access-t7cv2". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.192080 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-util" (OuterVolumeSpecName: "util") pod "79099d51-18d0-41da-af30-85ab620ddbd3" (UID: "79099d51-18d0-41da-af30-85ab620ddbd3"). InnerVolumeSpecName "util". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.280332 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-t7cv2\" (UniqueName: \"kubernetes.io/projected/79099d51-18d0-41da-af30-85ab620ddbd3-kube-api-access-t7cv2\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.280377 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.280392 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/79099d51-18d0-41da-af30-85ab620ddbd3-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.835759 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" event={"ID":"79099d51-18d0-41da-af30-85ab620ddbd3","Type":"ContainerDied","Data":"41bcfd65b934ca3601584d12629426201a37c63e7a6b94c84e7ecf8c3b10719c"} Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.835813 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="41bcfd65b934ca3601584d12629426201a37c63e7a6b94c84e7ecf8c3b10719c" Jan 23 09:24:45 crc kubenswrapper[5117]: I0123 09:24:45.836042 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/500c4f010310dad14c569d8fa2124fef1cf701af50ed1128cec4daf65a2nckh" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.046304 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.192164 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-util\") pod \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.192203 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-bundle\") pod \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.192291 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-9ssg8\" (UniqueName: \"kubernetes.io/projected/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-kube-api-access-9ssg8\") pod \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\" (UID: \"3f1a4ebb-a17e-41e7-b77a-db8c2669d625\") " Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.193193 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-bundle" (OuterVolumeSpecName: "bundle") pod "3f1a4ebb-a17e-41e7-b77a-db8c2669d625" (UID: "3f1a4ebb-a17e-41e7-b77a-db8c2669d625"). InnerVolumeSpecName "bundle". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.199602 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-kube-api-access-9ssg8" (OuterVolumeSpecName: "kube-api-access-9ssg8") pod "3f1a4ebb-a17e-41e7-b77a-db8c2669d625" (UID: "3f1a4ebb-a17e-41e7-b77a-db8c2669d625"). InnerVolumeSpecName "kube-api-access-9ssg8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.204286 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-util" (OuterVolumeSpecName: "util") pod "3f1a4ebb-a17e-41e7-b77a-db8c2669d625" (UID: "3f1a4ebb-a17e-41e7-b77a-db8c2669d625"). InnerVolumeSpecName "util". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.294002 5117 reconciler_common.go:299] "Volume detached for volume \"util\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-util\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.294040 5117 reconciler_common.go:299] "Volume detached for volume \"bundle\" (UniqueName: \"kubernetes.io/empty-dir/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-bundle\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.294055 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-9ssg8\" (UniqueName: \"kubernetes.io/projected/3f1a4ebb-a17e-41e7-b77a-db8c2669d625-kube-api-access-9ssg8\") on node \"crc\" DevicePath \"\"" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.857663 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" event={"ID":"3f1a4ebb-a17e-41e7-b77a-db8c2669d625","Type":"ContainerDied","Data":"bce729bc164c794aa9cc1cdf2c19bf7ef368aae0430000e540d40707a67fa6c4"} Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.857711 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="bce729bc164c794aa9cc1cdf2c19bf7ef368aae0430000e540d40707a67fa6c4" Jan 23 09:24:46 crc kubenswrapper[5117]: I0123 09:24:46.857823 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/372e7d5daac88c2e9a91443a2f508c8c20ad57bc41b1606ec960d61c09vcr8b" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.704587 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg"] Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.705971 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="util" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.705991 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="util" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706000 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="pull" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706006 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="pull" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706039 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="util" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706046 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="util" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706064 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="pull" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706071 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="pull" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706095 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="extract" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706101 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="extract" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706112 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="extract" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706119 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="extract" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706262 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="79099d51-18d0-41da-af30-85ab620ddbd3" containerName="extract" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.706275 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f1a4ebb-a17e-41e7-b77a-db8c2669d625" containerName="extract" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.711288 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.715253 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"service-telemetry-operator-dockercfg-kmcfx\"" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.731947 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg"] Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.786239 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-pds8t\" (UniqueName: \"kubernetes.io/projected/b89a3b39-385f-45b6-969f-302f4368f39c-kube-api-access-pds8t\") pod \"service-telemetry-operator-6b94fdf6bd-pbrqg\" (UID: \"b89a3b39-385f-45b6-969f-302f4368f39c\") " pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.786312 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b89a3b39-385f-45b6-969f-302f4368f39c-runner\") pod \"service-telemetry-operator-6b94fdf6bd-pbrqg\" (UID: \"b89a3b39-385f-45b6-969f-302f4368f39c\") " pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.887818 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-pds8t\" (UniqueName: \"kubernetes.io/projected/b89a3b39-385f-45b6-969f-302f4368f39c-kube-api-access-pds8t\") pod \"service-telemetry-operator-6b94fdf6bd-pbrqg\" (UID: \"b89a3b39-385f-45b6-969f-302f4368f39c\") " pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.887960 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b89a3b39-385f-45b6-969f-302f4368f39c-runner\") pod \"service-telemetry-operator-6b94fdf6bd-pbrqg\" (UID: \"b89a3b39-385f-45b6-969f-302f4368f39c\") " pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.888692 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/b89a3b39-385f-45b6-969f-302f4368f39c-runner\") pod \"service-telemetry-operator-6b94fdf6bd-pbrqg\" (UID: \"b89a3b39-385f-45b6-969f-302f4368f39c\") " pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:52 crc kubenswrapper[5117]: I0123 09:24:52.919128 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-pds8t\" (UniqueName: \"kubernetes.io/projected/b89a3b39-385f-45b6-969f-302f4368f39c-kube-api-access-pds8t\") pod \"service-telemetry-operator-6b94fdf6bd-pbrqg\" (UID: \"b89a3b39-385f-45b6-969f-302f4368f39c\") " pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.033626 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.238071 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg"] Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.770678 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:24:53 crc kubenswrapper[5117]: E0123 09:24:53.770932 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.901407 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw"] Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.906617 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.911023 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-operator-dockercfg-gwpfg\"" Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.915155 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw"] Jan 23 09:24:53 crc kubenswrapper[5117]: I0123 09:24:53.926020 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" event={"ID":"b89a3b39-385f-45b6-969f-302f4368f39c","Type":"ContainerStarted","Data":"2ed500856b7d36f952a32f0bed9df94c7f066335dcf5010cbfd13cf5bc60e0c2"} Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.005083 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qnm8s\" (UniqueName: \"kubernetes.io/projected/8f53b642-43da-4d64-a951-124d3f32f87c-kube-api-access-qnm8s\") pod \"smart-gateway-operator-7db5cb89d4-jkccw\" (UID: \"8f53b642-43da-4d64-a951-124d3f32f87c\") " pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.005170 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f53b642-43da-4d64-a951-124d3f32f87c-runner\") pod \"smart-gateway-operator-7db5cb89d4-jkccw\" (UID: \"8f53b642-43da-4d64-a951-124d3f32f87c\") " pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.106780 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qnm8s\" (UniqueName: \"kubernetes.io/projected/8f53b642-43da-4d64-a951-124d3f32f87c-kube-api-access-qnm8s\") pod \"smart-gateway-operator-7db5cb89d4-jkccw\" (UID: \"8f53b642-43da-4d64-a951-124d3f32f87c\") " pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.107027 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"runner\" (UniqueName: 
\"kubernetes.io/empty-dir/8f53b642-43da-4d64-a951-124d3f32f87c-runner\") pod \"smart-gateway-operator-7db5cb89d4-jkccw\" (UID: \"8f53b642-43da-4d64-a951-124d3f32f87c\") " pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.107601 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"runner\" (UniqueName: \"kubernetes.io/empty-dir/8f53b642-43da-4d64-a951-124d3f32f87c-runner\") pod \"smart-gateway-operator-7db5cb89d4-jkccw\" (UID: \"8f53b642-43da-4d64-a951-124d3f32f87c\") " pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.125934 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qnm8s\" (UniqueName: \"kubernetes.io/projected/8f53b642-43da-4d64-a951-124d3f32f87c-kube-api-access-qnm8s\") pod \"smart-gateway-operator-7db5cb89d4-jkccw\" (UID: \"8f53b642-43da-4d64-a951-124d3f32f87c\") " pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.227818 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.739790 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw"] Jan 23 09:24:54 crc kubenswrapper[5117]: I0123 09:24:54.945068 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" event={"ID":"8f53b642-43da-4d64-a951-124d3f32f87c","Type":"ContainerStarted","Data":"cc1f819fd7e52cd8275727d3e9b4d27213b2c122d3d65ff11979b2aa37d4fdf4"} Jan 23 09:25:06 crc kubenswrapper[5117]: I0123 09:25:06.770168 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:25:06 crc kubenswrapper[5117]: E0123 09:25:06.770904 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:25:20 crc kubenswrapper[5117]: I0123 09:25:20.771183 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:25:24 crc kubenswrapper[5117]: I0123 09:25:24.190590 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"994fa97d1cb60133ddd28a5a7c053d2a40f4fd74acc6d90fde40e86efd34b82f"} Jan 23 09:25:24 crc kubenswrapper[5117]: I0123 09:25:24.192164 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" event={"ID":"b89a3b39-385f-45b6-969f-302f4368f39c","Type":"ContainerStarted","Data":"cddd19f36dbc6011cea9b7d7f320144c65049eb5faa6fffbc114ce5162931ea0"} Jan 23 09:25:24 crc kubenswrapper[5117]: I0123 09:25:24.193442 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" 
event={"ID":"8f53b642-43da-4d64-a951-124d3f32f87c","Type":"ContainerStarted","Data":"6ae08b5128d619d9d8bf08278655cb120f630b9e3058ace5b37895ff84c77e80"} Jan 23 09:25:24 crc kubenswrapper[5117]: I0123 09:25:24.263070 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/service-telemetry-operator-6b94fdf6bd-pbrqg" podStartSLOduration=1.8339677060000001 podStartE2EDuration="32.263043322s" podCreationTimestamp="2026-01-23 09:24:52 +0000 UTC" firstStartedPulling="2026-01-23 09:24:53.249294712 +0000 UTC m=+1905.005419738" lastFinishedPulling="2026-01-23 09:25:23.678370328 +0000 UTC m=+1935.434495354" observedRunningTime="2026-01-23 09:25:24.231693824 +0000 UTC m=+1935.987818850" watchObservedRunningTime="2026-01-23 09:25:24.263043322 +0000 UTC m=+1936.019168358" Jan 23 09:25:24 crc kubenswrapper[5117]: I0123 09:25:24.274190 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/smart-gateway-operator-7db5cb89d4-jkccw" podStartSLOduration=2.379285875 podStartE2EDuration="31.274165837s" podCreationTimestamp="2026-01-23 09:24:53 +0000 UTC" firstStartedPulling="2026-01-23 09:24:54.752826077 +0000 UTC m=+1906.508951103" lastFinishedPulling="2026-01-23 09:25:23.647706039 +0000 UTC m=+1935.403831065" observedRunningTime="2026-01-23 09:25:24.258860043 +0000 UTC m=+1936.014985069" watchObservedRunningTime="2026-01-23 09:25:24.274165837 +0000 UTC m=+1936.030290863" Jan 23 09:25:47 crc kubenswrapper[5117]: I0123 09:25:47.729762 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-g2mnr"] Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.933093 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.936331 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-interconnect-sasl-config\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.936378 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-credentials\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.936749 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-ca\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.943726 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-openstack-ca\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.944070 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-inter-router-credentials\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.944278 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-dockercfg-fm2vj\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.944869 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-users\"" Jan 23 09:25:48 crc kubenswrapper[5117]: I0123 09:25:48.946547 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-g2mnr"] Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.072323 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.072607 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.072682 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-users\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.072826 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.072859 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.073649 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lfl7d\" (UniqueName: \"kubernetes.io/projected/f5adda23-da5c-4bf9-9074-7a4deaab679e-kube-api-access-lfl7d\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.073918 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-config\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.175427 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-config\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc 
kubenswrapper[5117]: I0123 09:25:49.175476 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.175503 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.175521 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-users\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.175549 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.175570 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.175612 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lfl7d\" (UniqueName: \"kubernetes.io/projected/f5adda23-da5c-4bf9-9074-7a4deaab679e-kube-api-access-lfl7d\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.176224 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-config\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.181810 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-users\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.182322 5117 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.191415 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.192759 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.192772 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.200504 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lfl7d\" (UniqueName: \"kubernetes.io/projected/f5adda23-da5c-4bf9-9074-7a4deaab679e-kube-api-access-lfl7d\") pod \"default-interconnect-55bf8d5cb-g2mnr\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.253066 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:25:49 crc kubenswrapper[5117]: I0123 09:25:49.442086 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-g2mnr"] Jan 23 09:25:50 crc kubenswrapper[5117]: I0123 09:25:50.396566 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" event={"ID":"f5adda23-da5c-4bf9-9074-7a4deaab679e","Type":"ContainerStarted","Data":"4141468fb2bfc2a79462ad85a840a198bb8298b09a343c704878bf1c4d925afb"} Jan 23 09:25:54 crc kubenswrapper[5117]: I0123 09:25:54.428525 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" event={"ID":"f5adda23-da5c-4bf9-9074-7a4deaab679e","Type":"ContainerStarted","Data":"c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c"} Jan 23 09:25:54 crc kubenswrapper[5117]: I0123 09:25:54.452977 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" podStartSLOduration=2.630671995 podStartE2EDuration="7.452954578s" podCreationTimestamp="2026-01-23 09:25:47 +0000 UTC" firstStartedPulling="2026-01-23 09:25:49.4462511 +0000 UTC m=+1961.202376126" lastFinishedPulling="2026-01-23 09:25:54.268533683 +0000 UTC m=+1966.024658709" observedRunningTime="2026-01-23 09:25:54.447584636 +0000 UTC m=+1966.203709652" watchObservedRunningTime="2026-01-23 09:25:54.452954578 +0000 UTC m=+1966.209079604" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.237471 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.244399 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.247019 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-0\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.247034 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-stf-dockercfg-fzl9g\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.247449 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-prometheus-proxy-tls\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.247540 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.247465 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-web-config\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.247676 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-2\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.248615 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-session-secret\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.249215 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"prometheus-default-rulefiles-1\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.251573 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"serving-certs-ca-bundle\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.252854 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"prometheus-default-tls-assets-0\"" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.257375 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.412719 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6853448b-d202-4d14-ba3b-20f05356a3c4-tls-assets\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.412789 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.412823 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413101 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413525 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413617 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-config\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413651 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jpn9v\" (UniqueName: \"kubernetes.io/projected/6853448b-d202-4d14-ba3b-20f05356a3c4-kube-api-access-jpn9v\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413681 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-web-config\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413742 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413872 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413902 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.413948 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6853448b-d202-4d14-ba3b-20f05356a3c4-config-out\") 
pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517028 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517102 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517160 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-config\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517191 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jpn9v\" (UniqueName: \"kubernetes.io/projected/6853448b-d202-4d14-ba3b-20f05356a3c4-kube-api-access-jpn9v\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517657 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-web-config\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517711 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517778 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517804 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517842 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: 
\"kubernetes.io/empty-dir/6853448b-d202-4d14-ba3b-20f05356a3c4-config-out\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517886 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6853448b-d202-4d14-ba3b-20f05356a3c4-tls-assets\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517920 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.517947 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: E0123 09:25:59.518272 5117 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 23 09:25:59 crc kubenswrapper[5117]: E0123 09:25:59.518358 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls podName:6853448b-d202-4d14-ba3b-20f05356a3c4 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:00.018335319 +0000 UTC m=+1971.774460345 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "6853448b-d202-4d14-ba3b-20f05356a3c4") : secret "default-prometheus-proxy-tls" not found Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.518908 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-0\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-0\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.518935 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"configmap-serving-certs-ca-bundle\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-configmap-serving-certs-ca-bundle\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.519939 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-1\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-1\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.520811 5117 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.520844 5117 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/18c892ee5f5d9feeeef9cdf5c33cc58cec150def2e81e8c9e8558beddb040020/globalmount\"" pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.524928 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-web-config\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.527451 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-config\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.528186 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"prometheus-default-rulefiles-2\" (UniqueName: \"kubernetes.io/configmap/6853448b-d202-4d14-ba3b-20f05356a3c4-prometheus-default-rulefiles-2\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.530820 5117 operation_generator.go:615] 
"MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-session-secret\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.530821 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/6853448b-d202-4d14-ba3b-20f05356a3c4-config-out\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.531449 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/6853448b-d202-4d14-ba3b-20f05356a3c4-tls-assets\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.543715 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jpn9v\" (UniqueName: \"kubernetes.io/projected/6853448b-d202-4d14-ba3b-20f05356a3c4-kube-api-access-jpn9v\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:25:59 crc kubenswrapper[5117]: I0123 09:25:59.557651 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-1e09ae1e-fb51-4152-b426-e2f93bc6e330\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.026067 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:26:00 crc kubenswrapper[5117]: E0123 09:26:00.026222 5117 secret.go:189] Couldn't get secret service-telemetry/default-prometheus-proxy-tls: secret "default-prometheus-proxy-tls" not found Jan 23 09:26:00 crc kubenswrapper[5117]: E0123 09:26:00.026350 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls podName:6853448b-d202-4d14-ba3b-20f05356a3c4 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:01.026283189 +0000 UTC m=+1972.782408215 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "secret-default-prometheus-proxy-tls" (UniqueName: "kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls") pod "prometheus-default-0" (UID: "6853448b-d202-4d14-ba3b-20f05356a3c4") : secret "default-prometheus-proxy-tls" not found Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.126720 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486006-mqgks"] Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.133423 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486006-mqgks"] Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.133586 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.135847 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.136086 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.136332 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.229905 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xpht8\" (UniqueName: \"kubernetes.io/projected/d28b8a9f-3568-47eb-9cb8-4b958070464b-kube-api-access-xpht8\") pod \"auto-csr-approver-29486006-mqgks\" (UID: \"d28b8a9f-3568-47eb-9cb8-4b958070464b\") " pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.331618 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xpht8\" (UniqueName: \"kubernetes.io/projected/d28b8a9f-3568-47eb-9cb8-4b958070464b-kube-api-access-xpht8\") pod \"auto-csr-approver-29486006-mqgks\" (UID: \"d28b8a9f-3568-47eb-9cb8-4b958070464b\") " pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.351106 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xpht8\" (UniqueName: \"kubernetes.io/projected/d28b8a9f-3568-47eb-9cb8-4b958070464b-kube-api-access-xpht8\") pod \"auto-csr-approver-29486006-mqgks\" (UID: \"d28b8a9f-3568-47eb-9cb8-4b958070464b\") " pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.453059 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:00 crc kubenswrapper[5117]: I0123 09:26:00.651012 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486006-mqgks"] Jan 23 09:26:01 crc kubenswrapper[5117]: I0123 09:26:01.049857 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:26:01 crc kubenswrapper[5117]: I0123 09:26:01.057804 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-prometheus-proxy-tls\" (UniqueName: \"kubernetes.io/secret/6853448b-d202-4d14-ba3b-20f05356a3c4-secret-default-prometheus-proxy-tls\") pod \"prometheus-default-0\" (UID: \"6853448b-d202-4d14-ba3b-20f05356a3c4\") " pod="service-telemetry/prometheus-default-0" Jan 23 09:26:01 crc kubenswrapper[5117]: I0123 09:26:01.069390 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/prometheus-default-0" Jan 23 09:26:01 crc kubenswrapper[5117]: I0123 09:26:01.303218 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/prometheus-default-0"] Jan 23 09:26:01 crc kubenswrapper[5117]: W0123 09:26:01.313642 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod6853448b_d202_4d14_ba3b_20f05356a3c4.slice/crio-30f395a0977ea44fd700bb844627dd5b71460966ef7fe6d2f9af95300846acaf WatchSource:0}: Error finding container 30f395a0977ea44fd700bb844627dd5b71460966ef7fe6d2f9af95300846acaf: Status 404 returned error can't find the container with id 30f395a0977ea44fd700bb844627dd5b71460966ef7fe6d2f9af95300846acaf Jan 23 09:26:01 crc kubenswrapper[5117]: I0123 09:26:01.487348 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486006-mqgks" event={"ID":"d28b8a9f-3568-47eb-9cb8-4b958070464b","Type":"ContainerStarted","Data":"c31bf06ca15b22c8910d9db1d325e176dae82a1c44eb662be173498605698837"} Jan 23 09:26:01 crc kubenswrapper[5117]: I0123 09:26:01.488744 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"6853448b-d202-4d14-ba3b-20f05356a3c4","Type":"ContainerStarted","Data":"30f395a0977ea44fd700bb844627dd5b71460966ef7fe6d2f9af95300846acaf"} Jan 23 09:26:03 crc kubenswrapper[5117]: I0123 09:26:03.508742 5117 generic.go:358] "Generic (PLEG): container finished" podID="d28b8a9f-3568-47eb-9cb8-4b958070464b" containerID="de23e6420da0a0ec66f7ad880fb305728b32f10df797214aa643585ff1df5522" exitCode=0 Jan 23 09:26:03 crc kubenswrapper[5117]: I0123 09:26:03.508808 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486006-mqgks" event={"ID":"d28b8a9f-3568-47eb-9cb8-4b958070464b","Type":"ContainerDied","Data":"de23e6420da0a0ec66f7ad880fb305728b32f10df797214aa643585ff1df5522"} Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.170586 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-jgfx4"] Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.178537 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.182731 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgfx4"] Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.294781 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-catalog-content\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.294855 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-utilities\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.294893 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/9849141a-d6d5-4378-acef-e2ace2a25e2e-kube-api-access-mstqg\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.396521 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-catalog-content\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.396664 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-utilities\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.396711 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/9849141a-d6d5-4378-acef-e2ace2a25e2e-kube-api-access-mstqg\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.397459 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-catalog-content\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.397633 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-utilities\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.424737 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume 
\"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/9849141a-d6d5-4378-acef-e2ace2a25e2e-kube-api-access-mstqg\") pod \"redhat-operators-jgfx4\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.499610 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.801105 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.903738 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-xpht8\" (UniqueName: \"kubernetes.io/projected/d28b8a9f-3568-47eb-9cb8-4b958070464b-kube-api-access-xpht8\") pod \"d28b8a9f-3568-47eb-9cb8-4b958070464b\" (UID: \"d28b8a9f-3568-47eb-9cb8-4b958070464b\") " Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.951223 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-jgfx4"] Jan 23 09:26:04 crc kubenswrapper[5117]: I0123 09:26:04.973637 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/d28b8a9f-3568-47eb-9cb8-4b958070464b-kube-api-access-xpht8" (OuterVolumeSpecName: "kube-api-access-xpht8") pod "d28b8a9f-3568-47eb-9cb8-4b958070464b" (UID: "d28b8a9f-3568-47eb-9cb8-4b958070464b"). InnerVolumeSpecName "kube-api-access-xpht8". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.005057 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-xpht8\" (UniqueName: \"kubernetes.io/projected/d28b8a9f-3568-47eb-9cb8-4b958070464b-kube-api-access-xpht8\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:05 crc kubenswrapper[5117]: W0123 09:26:05.076859 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod9849141a_d6d5_4378_acef_e2ace2a25e2e.slice/crio-e4921f36e7a176724d1a35b3f5e388500cd9891280fe6f998aba6ee82635cc91 WatchSource:0}: Error finding container e4921f36e7a176724d1a35b3f5e388500cd9891280fe6f998aba6ee82635cc91: Status 404 returned error can't find the container with id e4921f36e7a176724d1a35b3f5e388500cd9891280fe6f998aba6ee82635cc91 Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.534781 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486006-mqgks" event={"ID":"d28b8a9f-3568-47eb-9cb8-4b958070464b","Type":"ContainerDied","Data":"c31bf06ca15b22c8910d9db1d325e176dae82a1c44eb662be173498605698837"} Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.534831 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="c31bf06ca15b22c8910d9db1d325e176dae82a1c44eb662be173498605698837" Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.534931 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486006-mqgks" Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.543717 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerStarted","Data":"e4921f36e7a176724d1a35b3f5e388500cd9891280fe6f998aba6ee82635cc91"} Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.890617 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29486000-cskgv"] Jan 23 09:26:05 crc kubenswrapper[5117]: I0123 09:26:05.896830 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29486000-cskgv"] Jan 23 09:26:06 crc kubenswrapper[5117]: I0123 09:26:06.553270 5117 generic.go:358] "Generic (PLEG): container finished" podID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerID="1f1bb800819a3ccc0eb9db754bb81f9dec90c90531739787cae75a0c36caf07e" exitCode=0 Jan 23 09:26:06 crc kubenswrapper[5117]: I0123 09:26:06.553371 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerDied","Data":"1f1bb800819a3ccc0eb9db754bb81f9dec90c90531739787cae75a0c36caf07e"} Jan 23 09:26:06 crc kubenswrapper[5117]: I0123 09:26:06.555666 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"6853448b-d202-4d14-ba3b-20f05356a3c4","Type":"ContainerStarted","Data":"a390f7fc2c5ec65fbef66fc17fd808a23fb6ad72b2f86ac4d682c6bc91eba810"} Jan 23 09:26:06 crc kubenswrapper[5117]: I0123 09:26:06.779075 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4ea574d9-c557-439d-be8a-bf6da4bdc517" path="/var/lib/kubelet/pods/4ea574d9-c557-439d-be8a-bf6da4bdc517/volumes" Jan 23 09:26:09 crc kubenswrapper[5117]: I0123 09:26:09.348636 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerStarted","Data":"7fa6fbb65a2884d91bc4186ea9a37ba87cc6254f13f104c2519c572311bbd456"} Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.302851 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-vrgvk"] Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.303745 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="d28b8a9f-3568-47eb-9cb8-4b958070464b" containerName="oc" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.303767 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="d28b8a9f-3568-47eb-9cb8-4b958070464b" containerName="oc" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.303946 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="d28b8a9f-3568-47eb-9cb8-4b958070464b" containerName="oc" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.308741 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.311219 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-vrgvk"] Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.356844 5117 generic.go:358] "Generic (PLEG): container finished" podID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerID="7fa6fbb65a2884d91bc4186ea9a37ba87cc6254f13f104c2519c572311bbd456" exitCode=0 Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.356889 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerDied","Data":"7fa6fbb65a2884d91bc4186ea9a37ba87cc6254f13f104c2519c572311bbd456"} Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.381355 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-mh468\" (UniqueName: \"kubernetes.io/projected/0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9-kube-api-access-mh468\") pod \"default-snmp-webhook-694dc457d5-vrgvk\" (UID: \"0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.482664 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-mh468\" (UniqueName: \"kubernetes.io/projected/0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9-kube-api-access-mh468\") pod \"default-snmp-webhook-694dc457d5-vrgvk\" (UID: \"0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.518333 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-mh468\" (UniqueName: \"kubernetes.io/projected/0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9-kube-api-access-mh468\") pod \"default-snmp-webhook-694dc457d5-vrgvk\" (UID: \"0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9\") " pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" Jan 23 09:26:10 crc kubenswrapper[5117]: I0123 09:26:10.626016 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" Jan 23 09:26:11 crc kubenswrapper[5117]: I0123 09:26:11.026031 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-snmp-webhook-694dc457d5-vrgvk"] Jan 23 09:26:11 crc kubenswrapper[5117]: I0123 09:26:11.364336 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerStarted","Data":"117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c"} Jan 23 09:26:11 crc kubenswrapper[5117]: I0123 09:26:11.365612 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" event={"ID":"0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9","Type":"ContainerStarted","Data":"597dde395a92a07d8aee48b55c75fb8f24da60b8e4758f1c4869f615217d41f2"} Jan 23 09:26:11 crc kubenswrapper[5117]: I0123 09:26:11.379551 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/redhat-operators-jgfx4" podStartSLOduration=6.684251203 podStartE2EDuration="7.37953306s" podCreationTimestamp="2026-01-23 09:26:04 +0000 UTC" firstStartedPulling="2026-01-23 09:26:08.340498785 +0000 UTC m=+1980.096623811" lastFinishedPulling="2026-01-23 09:26:09.035780642 +0000 UTC m=+1980.791905668" observedRunningTime="2026-01-23 09:26:11.379280433 +0000 UTC m=+1983.135405459" watchObservedRunningTime="2026-01-23 09:26:11.37953306 +0000 UTC m=+1983.135658076" Jan 23 09:26:13 crc kubenswrapper[5117]: I0123 09:26:13.381878 5117 generic.go:358] "Generic (PLEG): container finished" podID="6853448b-d202-4d14-ba3b-20f05356a3c4" containerID="a390f7fc2c5ec65fbef66fc17fd808a23fb6ad72b2f86ac4d682c6bc91eba810" exitCode=0 Jan 23 09:26:13 crc kubenswrapper[5117]: I0123 09:26:13.381981 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"6853448b-d202-4d14-ba3b-20f05356a3c4","Type":"ContainerDied","Data":"a390f7fc2c5ec65fbef66fc17fd808a23fb6ad72b2f86ac4d682c6bc91eba810"} Jan 23 09:26:13 crc kubenswrapper[5117]: I0123 09:26:13.992584 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.080598 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.080872 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.084364 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-cluster-tls-config\"" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.084726 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-tls-assets-0\"" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.084847 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-alertmanager-proxy-tls\"" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.084907 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-stf-dockercfg-cdtv5\"" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.085285 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-generated\"" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.086720 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"alertmanager-default-web-config\"" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139327 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-config-volume\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139386 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139479 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ccdbp\" (UniqueName: \"kubernetes.io/projected/129d4009-3ec8-41de-8bbe-cd549253e78d-kube-api-access-ccdbp\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139524 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-web-config\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139545 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139854 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\" 
(UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.139991 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/129d4009-3ec8-41de-8bbe-cd549253e78d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.140063 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/129d4009-3ec8-41de-8bbe-cd549253e78d-config-out\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.140097 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.241320 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.241383 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/129d4009-3ec8-41de-8bbe-cd549253e78d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.241434 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/129d4009-3ec8-41de-8bbe-cd549253e78d-config-out\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.241463 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.241511 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-config-volume\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.241547 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: E0123 09:26:14.242409 5117 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.242447 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ccdbp\" (UniqueName: \"kubernetes.io/projected/129d4009-3ec8-41de-8bbe-cd549253e78d-kube-api-access-ccdbp\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: E0123 09:26:14.242516 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls podName:129d4009-3ec8-41de-8bbe-cd549253e78d nodeName:}" failed. No retries permitted until 2026-01-23 09:26:14.742488547 +0000 UTC m=+1986.498613753 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "129d4009-3ec8-41de-8bbe-cd549253e78d") : secret "default-alertmanager-proxy-tls" not found Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.242590 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-web-config\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.242623 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.250718 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-config-volume\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.251112 5117 csi_attacher.go:373] kubernetes.io/csi: attacher.MountDevice STAGE_UNSTAGE_VOLUME capability not set. Skipping MountDevice... 
Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.251442 5117 operation_generator.go:557] "MountVolume.MountDevice succeeded for volume \"pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") device mount path \"/var/lib/kubelet/plugins/kubernetes.io/csi/kubevirt.io.hostpath-provisioner/6c0e5b4efd34dc8d8c6e81c68d20da0e40b9b3435e3024819cbf7a5cd78d1c8d/globalmount\"" pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.254786 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-out\" (UniqueName: \"kubernetes.io/empty-dir/129d4009-3ec8-41de-8bbe-cd549253e78d-config-out\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.255295 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"web-config\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-web-config\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.255580 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"tls-assets\" (UniqueName: \"kubernetes.io/projected/129d4009-3ec8-41de-8bbe-cd549253e78d-tls-assets\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.256002 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-session-secret\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-session-secret\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.256406 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"cluster-tls-config\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-cluster-tls-config\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.264346 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ccdbp\" (UniqueName: \"kubernetes.io/projected/129d4009-3ec8-41de-8bbe-cd549253e78d-kube-api-access-ccdbp\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.291928 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\" (UniqueName: \"kubernetes.io/csi/kubevirt.io.hostpath-provisioner^pvc-7cb35b73-a9c0-44a7-bdc7-4472ce0ffadd\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.500028 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.500469 
5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:14 crc kubenswrapper[5117]: I0123 09:26:14.752280 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:14 crc kubenswrapper[5117]: E0123 09:26:14.752435 5117 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 23 09:26:14 crc kubenswrapper[5117]: E0123 09:26:14.752522 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls podName:129d4009-3ec8-41de-8bbe-cd549253e78d nodeName:}" failed. No retries permitted until 2026-01-23 09:26:15.752481445 +0000 UTC m=+1987.508606461 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "129d4009-3ec8-41de-8bbe-cd549253e78d") : secret "default-alertmanager-proxy-tls" not found Jan 23 09:26:15 crc kubenswrapper[5117]: I0123 09:26:15.587584 5117 prober.go:120] "Probe failed" probeType="Startup" pod="openshift-marketplace/redhat-operators-jgfx4" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="registry-server" probeResult="failure" output=< Jan 23 09:26:15 crc kubenswrapper[5117]: timeout: failed to connect service ":50051" within 1s Jan 23 09:26:15 crc kubenswrapper[5117]: > Jan 23 09:26:15 crc kubenswrapper[5117]: I0123 09:26:15.770703 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:15 crc kubenswrapper[5117]: E0123 09:26:15.770855 5117 secret.go:189] Couldn't get secret service-telemetry/default-alertmanager-proxy-tls: secret "default-alertmanager-proxy-tls" not found Jan 23 09:26:15 crc kubenswrapper[5117]: E0123 09:26:15.770934 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls podName:129d4009-3ec8-41de-8bbe-cd549253e78d nodeName:}" failed. No retries permitted until 2026-01-23 09:26:17.770917296 +0000 UTC m=+1989.527042322 (durationBeforeRetry 2s). 
Error: MountVolume.SetUp failed for volume "secret-default-alertmanager-proxy-tls" (UniqueName: "kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls") pod "alertmanager-default-0" (UID: "129d4009-3ec8-41de-8bbe-cd549253e78d") : secret "default-alertmanager-proxy-tls" not found Jan 23 09:26:16 crc kubenswrapper[5117]: I0123 09:26:16.872198 5117 scope.go:117] "RemoveContainer" containerID="cbab37bcbe0c7a08de05019a18c98e913b189d959643d2998e1c077fa54fd662" Jan 23 09:26:17 crc kubenswrapper[5117]: I0123 09:26:17.803895 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:17 crc kubenswrapper[5117]: I0123 09:26:17.819324 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-default-alertmanager-proxy-tls\" (UniqueName: \"kubernetes.io/secret/129d4009-3ec8-41de-8bbe-cd549253e78d-secret-default-alertmanager-proxy-tls\") pod \"alertmanager-default-0\" (UID: \"129d4009-3ec8-41de-8bbe-cd549253e78d\") " pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:18 crc kubenswrapper[5117]: I0123 09:26:18.014703 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/alertmanager-default-0" Jan 23 09:26:20 crc kubenswrapper[5117]: I0123 09:26:20.364205 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/alertmanager-default-0"] Jan 23 09:26:24 crc kubenswrapper[5117]: I0123 09:26:24.547102 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:24 crc kubenswrapper[5117]: I0123 09:26:24.591300 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:24 crc kubenswrapper[5117]: I0123 09:26:24.781769 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgfx4"] Jan 23 09:26:26 crc kubenswrapper[5117]: I0123 09:26:26.485446 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/redhat-operators-jgfx4" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="registry-server" containerID="cri-o://117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c" gracePeriod=2 Jan 23 09:26:27 crc kubenswrapper[5117]: I0123 09:26:27.493033 5117 generic.go:358] "Generic (PLEG): container finished" podID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerID="117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c" exitCode=0 Jan 23 09:26:27 crc kubenswrapper[5117]: I0123 09:26:27.493077 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerDied","Data":"117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c"} Jan 23 09:26:28 crc kubenswrapper[5117]: I0123 09:26:28.501325 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"129d4009-3ec8-41de-8bbe-cd549253e78d","Type":"ContainerStarted","Data":"ff587f45a46877b557cb7941144dcea64a90d843271e0b77901bf5dc8491f7b0"} Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 
09:26:32.199013 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6"] Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.403700 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6"] Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.403959 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.406918 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-session-secret\"" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.407014 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"smart-gateway-dockercfg-r5zht\"" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.407267 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-proxy-tls\"" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.408089 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-meter-sg-core-configmap\"" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.441000 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-d858v\" (UniqueName: \"kubernetes.io/projected/79b9017f-cee9-4b84-a6d2-1bd8843c9538-kube-api-access-d858v\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.441143 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.441252 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.441298 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/79b9017f-cee9-4b84-a6d2-1bd8843c9538-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.441342 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: 
\"kubernetes.io/empty-dir/79b9017f-cee9-4b84-a6d2-1bd8843c9538-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.543913 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.544026 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.544066 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/79b9017f-cee9-4b84-a6d2-1bd8843c9538-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.544098 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/79b9017f-cee9-4b84-a6d2-1bd8843c9538-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.544145 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-d858v\" (UniqueName: \"kubernetes.io/projected/79b9017f-cee9-4b84-a6d2-1bd8843c9538-kube-api-access-d858v\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: E0123 09:26:32.544685 5117 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 09:26:32 crc kubenswrapper[5117]: E0123 09:26:32.544770 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls podName:79b9017f-cee9-4b84-a6d2-1bd8843c9538 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:33.044752814 +0000 UTC m=+2004.800877840 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" (UID: "79b9017f-cee9-4b84-a6d2-1bd8843c9538") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.551941 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/79b9017f-cee9-4b84-a6d2-1bd8843c9538-socket-dir\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.553930 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/79b9017f-cee9-4b84-a6d2-1bd8843c9538-sg-core-config\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.564956 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-session-secret\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:32 crc kubenswrapper[5117]: I0123 09:26:32.577884 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-d858v\" (UniqueName: \"kubernetes.io/projected/79b9017f-cee9-4b84-a6d2-1bd8843c9538-kube-api-access-d858v\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:33 crc kubenswrapper[5117]: I0123 09:26:33.051700 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:33 crc kubenswrapper[5117]: E0123 09:26:33.051919 5117 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-coll-meter-proxy-tls: secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 09:26:33 crc kubenswrapper[5117]: E0123 09:26:33.052049 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls podName:79b9017f-cee9-4b84-a6d2-1bd8843c9538 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:34.052022474 +0000 UTC m=+2005.808147660 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-coll-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls") pod "default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" (UID: "79b9017f-cee9-4b84-a6d2-1bd8843c9538") : secret "default-cloud1-coll-meter-proxy-tls" not found Jan 23 09:26:34 crc kubenswrapper[5117]: I0123 09:26:34.070233 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:34 crc kubenswrapper[5117]: I0123 09:26:34.083288 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-coll-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/79b9017f-cee9-4b84-a6d2-1bd8843c9538-default-cloud1-coll-meter-proxy-tls\") pod \"default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6\" (UID: \"79b9017f-cee9-4b84-a6d2-1bd8843c9538\") " pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:34 crc kubenswrapper[5117]: I0123 09:26:34.224424 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" Jan 23 09:26:34 crc kubenswrapper[5117]: E0123 09:26:34.549298 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c is running failed: container process not found" containerID="117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 09:26:34 crc kubenswrapper[5117]: E0123 09:26:34.549924 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c is running failed: container process not found" containerID="117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 09:26:34 crc kubenswrapper[5117]: E0123 09:26:34.550207 5117 log.go:32] "ExecSync cmd from runtime service failed" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c is running failed: container process not found" containerID="117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c" cmd=["grpc_health_probe","-addr=:50051"] Jan 23 09:26:34 crc kubenswrapper[5117]: E0123 09:26:34.550247 5117 prober.go:104] "Probe errored" err="rpc error: code = NotFound desc = container is not created or running: checking if PID of 117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c is running failed: container process not found" probeType="Readiness" pod="openshift-marketplace/redhat-operators-jgfx4" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="registry-server" probeResult="unknown" Jan 23 09:26:35 crc kubenswrapper[5117]: I0123 09:26:35.985986 5117 kubelet.go:2537] "SyncLoop ADD" source="api" 
pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5"] Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.006520 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.006359 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5"] Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.012057 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-proxy-tls\"" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.012213 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-meter-sg-core-configmap\"" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.087008 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.100366 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4384e25c-87ff-41fc-8c37-e85b49dc0035-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.100455 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ctqqw\" (UniqueName: \"kubernetes.io/projected/4384e25c-87ff-41fc-8c37-e85b49dc0035-kube-api-access-ctqqw\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.100567 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.100592 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.100616 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4384e25c-87ff-41fc-8c37-e85b49dc0035-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 
09:26:36.201801 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/9849141a-d6d5-4378-acef-e2ace2a25e2e-kube-api-access-mstqg\") pod \"9849141a-d6d5-4378-acef-e2ace2a25e2e\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.201944 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-utilities\") pod \"9849141a-d6d5-4378-acef-e2ace2a25e2e\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.202782 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-utilities" (OuterVolumeSpecName: "utilities") pod "9849141a-d6d5-4378-acef-e2ace2a25e2e" (UID: "9849141a-d6d5-4378-acef-e2ace2a25e2e"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.202871 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-catalog-content\") pod \"9849141a-d6d5-4378-acef-e2ace2a25e2e\" (UID: \"9849141a-d6d5-4378-acef-e2ace2a25e2e\") " Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.210548 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4384e25c-87ff-41fc-8c37-e85b49dc0035-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.210643 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ctqqw\" (UniqueName: \"kubernetes.io/projected/4384e25c-87ff-41fc-8c37-e85b49dc0035-kube-api-access-ctqqw\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.210876 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.210918 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.210965 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4384e25c-87ff-41fc-8c37-e85b49dc0035-socket-dir\") pod 
\"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.211153 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.211511 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/4384e25c-87ff-41fc-8c37-e85b49dc0035-socket-dir\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.211833 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/4384e25c-87ff-41fc-8c37-e85b49dc0035-sg-core-config\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: E0123 09:26:36.211925 5117 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 09:26:36 crc kubenswrapper[5117]: E0123 09:26:36.212042 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls podName:4384e25c-87ff-41fc-8c37-e85b49dc0035 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:36.712021481 +0000 UTC m=+2008.468146507 (durationBeforeRetry 500ms). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" (UID: "4384e25c-87ff-41fc-8c37-e85b49dc0035") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.217316 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/9849141a-d6d5-4378-acef-e2ace2a25e2e-kube-api-access-mstqg" (OuterVolumeSpecName: "kube-api-access-mstqg") pod "9849141a-d6d5-4378-acef-e2ace2a25e2e" (UID: "9849141a-d6d5-4378-acef-e2ace2a25e2e"). InnerVolumeSpecName "kube-api-access-mstqg". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.228979 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-session-secret\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.235207 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ctqqw\" (UniqueName: \"kubernetes.io/projected/4384e25c-87ff-41fc-8c37-e85b49dc0035-kube-api-access-ctqqw\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.312767 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-mstqg\" (UniqueName: \"kubernetes.io/projected/9849141a-d6d5-4378-acef-e2ace2a25e2e-kube-api-access-mstqg\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.313595 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "9849141a-d6d5-4378-acef-e2ace2a25e2e" (UID: "9849141a-d6d5-4378-acef-e2ace2a25e2e"). InnerVolumeSpecName "catalog-content". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.414383 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/9849141a-d6d5-4378-acef-e2ace2a25e2e-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.563371 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-jgfx4" event={"ID":"9849141a-d6d5-4378-acef-e2ace2a25e2e","Type":"ContainerDied","Data":"e4921f36e7a176724d1a35b3f5e388500cd9891280fe6f998aba6ee82635cc91"} Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.563430 5117 scope.go:117] "RemoveContainer" containerID="117f8a3517a21cd7f8e8f85b2d0f762a108ae98af6a6cfe71a26ec942f81251c" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.563449 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-jgfx4" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.596207 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/redhat-operators-jgfx4"] Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.604735 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/redhat-operators-jgfx4"] Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.719859 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:36 crc kubenswrapper[5117]: E0123 09:26:36.720070 5117 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-ceil-meter-proxy-tls: secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 09:26:36 crc kubenswrapper[5117]: E0123 09:26:36.720175 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls podName:4384e25c-87ff-41fc-8c37-e85b49dc0035 nodeName:}" failed. No retries permitted until 2026-01-23 09:26:37.720155614 +0000 UTC m=+2009.476280640 (durationBeforeRetry 1s). Error: MountVolume.SetUp failed for volume "default-cloud1-ceil-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls") pod "default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" (UID: "4384e25c-87ff-41fc-8c37-e85b49dc0035") : secret "default-cloud1-ceil-meter-proxy-tls" not found Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.734705 5117 scope.go:117] "RemoveContainer" containerID="7fa6fbb65a2884d91bc4186ea9a37ba87cc6254f13f104c2519c572311bbd456" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.782614 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" path="/var/lib/kubelet/pods/9849141a-d6d5-4378-acef-e2ace2a25e2e/volumes" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.789896 5117 scope.go:117] "RemoveContainer" containerID="1f1bb800819a3ccc0eb9db754bb81f9dec90c90531739787cae75a0c36caf07e" Jan 23 09:26:36 crc kubenswrapper[5117]: I0123 09:26:36.967654 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6"] Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.571604 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" event={"ID":"0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9","Type":"ContainerStarted","Data":"cb8bdca2e0d2d80430b34ff1ff946ea461d67b9a1fabd2e7bcaca5533172d196"} Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.575180 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerStarted","Data":"e3aaefca0891098504962748041a5ad79715020cc16856cc3d5b3223505334b5"} Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.577825 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" 
event={"ID":"6853448b-d202-4d14-ba3b-20f05356a3c4","Type":"ContainerStarted","Data":"1c9b73d8159f84097d410e7c7f992a45d13b5f393ce1434d405dc2552386bbbb"} Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.594124 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-snmp-webhook-694dc457d5-vrgvk" podStartSLOduration=1.803277166 podStartE2EDuration="27.594101163s" podCreationTimestamp="2026-01-23 09:26:10 +0000 UTC" firstStartedPulling="2026-01-23 09:26:11.040596038 +0000 UTC m=+1982.796721064" lastFinishedPulling="2026-01-23 09:26:36.831420035 +0000 UTC m=+2008.587545061" observedRunningTime="2026-01-23 09:26:37.587860249 +0000 UTC m=+2009.343985275" watchObservedRunningTime="2026-01-23 09:26:37.594101163 +0000 UTC m=+2009.350226189" Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.734750 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.741999 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-ceil-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/4384e25c-87ff-41fc-8c37-e85b49dc0035-default-cloud1-ceil-meter-proxy-tls\") pod \"default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5\" (UID: \"4384e25c-87ff-41fc-8c37-e85b49dc0035\") " pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:37 crc kubenswrapper[5117]: I0123 09:26:37.823926 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" Jan 23 09:26:38 crc kubenswrapper[5117]: I0123 09:26:38.141939 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5"] Jan 23 09:26:38 crc kubenswrapper[5117]: W0123 09:26:38.145364 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod4384e25c_87ff_41fc_8c37_e85b49dc0035.slice/crio-0380e1d3a775ccf4e189cb46071f3da5f25be50284018198cef8f3cc29ed8bab WatchSource:0}: Error finding container 0380e1d3a775ccf4e189cb46071f3da5f25be50284018198cef8f3cc29ed8bab: Status 404 returned error can't find the container with id 0380e1d3a775ccf4e189cb46071f3da5f25be50284018198cef8f3cc29ed8bab Jan 23 09:26:38 crc kubenswrapper[5117]: I0123 09:26:38.590785 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerStarted","Data":"0380e1d3a775ccf4e189cb46071f3da5f25be50284018198cef8f3cc29ed8bab"} Jan 23 09:26:39 crc kubenswrapper[5117]: I0123 09:26:39.601884 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"129d4009-3ec8-41de-8bbe-cd549253e78d","Type":"ContainerStarted","Data":"03aaa5b54d6a756c7ddb64bb204ada3b7d5f95e1dd1f16da8397c7f9c6b7892b"} Jan 23 09:26:39 crc kubenswrapper[5117]: I0123 09:26:39.608109 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"6853448b-d202-4d14-ba3b-20f05356a3c4","Type":"ContainerStarted","Data":"f3fedd5758a107c87a41229f8ecdca29a01a4af450eef6685cb29f34086455a4"} Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792065 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt"] Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792804 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="registry-server" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792822 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="registry-server" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792848 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="extract-content" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792854 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="extract-content" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792864 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="extract-utilities" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.792870 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="extract-utilities" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.793003 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="9849141a-d6d5-4378-acef-e2ace2a25e2e" containerName="registry-server" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.824678 5117 kubelet.go:2544] "SyncLoop 
UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt"] Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.824856 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.827615 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-proxy-tls\"" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.832659 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-sens-meter-sg-core-configmap\"" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.985504 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-xwwkd\" (UniqueName: \"kubernetes.io/projected/9f58aef0-7287-43a4-b280-bbf650fec94d-kube-api-access-xwwkd\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.985612 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.985682 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9f58aef0-7287-43a4-b280-bbf650fec94d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.985758 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:40 crc kubenswrapper[5117]: I0123 09:26:40.985808 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f58aef0-7287-43a4-b280-bbf650fec94d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.087315 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-xwwkd\" (UniqueName: \"kubernetes.io/projected/9f58aef0-7287-43a4-b280-bbf650fec94d-kube-api-access-xwwkd\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " 
pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.087420 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.087471 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9f58aef0-7287-43a4-b280-bbf650fec94d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.087536 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: E0123 09:26:41.087561 5117 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.087563 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f58aef0-7287-43a4-b280-bbf650fec94d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: E0123 09:26:41.087627 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls podName:9f58aef0-7287-43a4-b280-bbf650fec94d nodeName:}" failed. No retries permitted until 2026-01-23 09:26:41.587609029 +0000 UTC m=+2013.343734045 (durationBeforeRetry 500ms). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" (UID: "9f58aef0-7287-43a4-b280-bbf650fec94d") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.088646 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/9f58aef0-7287-43a4-b280-bbf650fec94d-socket-dir\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.089205 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/9f58aef0-7287-43a4-b280-bbf650fec94d-sg-core-config\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.106288 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"session-secret\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-session-secret\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.106765 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-xwwkd\" (UniqueName: \"kubernetes.io/projected/9f58aef0-7287-43a4-b280-bbf650fec94d-kube-api-access-xwwkd\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: I0123 09:26:41.597296 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:41 crc kubenswrapper[5117]: E0123 09:26:41.597416 5117 secret.go:189] Couldn't get secret service-telemetry/default-cloud1-sens-meter-proxy-tls: secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 09:26:41 crc kubenswrapper[5117]: E0123 09:26:41.597519 5117 nestedpendingoperations.go:348] Operation for "{volumeName:kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls podName:9f58aef0-7287-43a4-b280-bbf650fec94d nodeName:}" failed. No retries permitted until 2026-01-23 09:26:42.597502422 +0000 UTC m=+2014.353627448 (durationBeforeRetry 1s). 
Error: MountVolume.SetUp failed for volume "default-cloud1-sens-meter-proxy-tls" (UniqueName: "kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls") pod "default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" (UID: "9f58aef0-7287-43a4-b280-bbf650fec94d") : secret "default-cloud1-sens-meter-proxy-tls" not found Jan 23 09:26:42 crc kubenswrapper[5117]: I0123 09:26:42.614722 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:42 crc kubenswrapper[5117]: I0123 09:26:42.620551 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-cloud1-sens-meter-proxy-tls\" (UniqueName: \"kubernetes.io/secret/9f58aef0-7287-43a4-b280-bbf650fec94d-default-cloud1-sens-meter-proxy-tls\") pod \"default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt\" (UID: \"9f58aef0-7287-43a4-b280-bbf650fec94d\") " pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:42 crc kubenswrapper[5117]: I0123 09:26:42.649631 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" Jan 23 09:26:46 crc kubenswrapper[5117]: I0123 09:26:46.678103 5117 generic.go:358] "Generic (PLEG): container finished" podID="129d4009-3ec8-41de-8bbe-cd549253e78d" containerID="03aaa5b54d6a756c7ddb64bb204ada3b7d5f95e1dd1f16da8397c7f9c6b7892b" exitCode=0 Jan 23 09:26:46 crc kubenswrapper[5117]: I0123 09:26:46.678556 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"129d4009-3ec8-41de-8bbe-cd549253e78d","Type":"ContainerDied","Data":"03aaa5b54d6a756c7ddb64bb204ada3b7d5f95e1dd1f16da8397c7f9c6b7892b"} Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.090708 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt"] Jan 23 09:26:48 crc kubenswrapper[5117]: W0123 09:26:48.103218 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod9f58aef0_7287_43a4_b280_bbf650fec94d.slice/crio-c2414fe8a84df38cd8462e56004f9b67e5fb0ef7a3554fa05278d05947b70ed0 WatchSource:0}: Error finding container c2414fe8a84df38cd8462e56004f9b67e5fb0ef7a3554fa05278d05947b70ed0: Status 404 returned error can't find the container with id c2414fe8a84df38cd8462e56004f9b67e5fb0ef7a3554fa05278d05947b70ed0 Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.695603 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/prometheus-default-0" event={"ID":"6853448b-d202-4d14-ba3b-20f05356a3c4","Type":"ContainerStarted","Data":"22c2498c3ccef700acf573da37e969e77d7536649e05b64c1d97d40f95f1290f"} Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.700594 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerStarted","Data":"c2414fe8a84df38cd8462e56004f9b67e5fb0ef7a3554fa05278d05947b70ed0"} Jan 23 09:26:48 crc 
kubenswrapper[5117]: I0123 09:26:48.718315 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerStarted","Data":"2768defa387e3801688140eca4ae004090dd91c1f0ba1570ade8da9492dc094d"} Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.722638 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerStarted","Data":"627fa3e4b2a1286ea963985ec13bb0e3e5b3a430f12c9329a6ccfdc6a6bcb9fe"} Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.726925 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/prometheus-default-0" podStartSLOduration=4.010413852 podStartE2EDuration="50.726907594s" podCreationTimestamp="2026-01-23 09:25:58 +0000 UTC" firstStartedPulling="2026-01-23 09:26:01.31725121 +0000 UTC m=+1973.073376236" lastFinishedPulling="2026-01-23 09:26:48.033744952 +0000 UTC m=+2019.789869978" observedRunningTime="2026-01-23 09:26:48.72104003 +0000 UTC m=+2020.477165076" watchObservedRunningTime="2026-01-23 09:26:48.726907594 +0000 UTC m=+2020.483032620" Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.933698 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4"] Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.946604 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.949526 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-coll-event-sg-core-configmap\"" Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.950345 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"elasticsearch-es-cert\"" Jan 23 09:26:48 crc kubenswrapper[5117]: I0123 09:26:48.953374 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4"] Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.030697 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/23260332-e384-4fc3-bcaf-3034ddf99446-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.030754 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/23260332-e384-4fc3-bcaf-3034ddf99446-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.030818 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/23260332-e384-4fc3-bcaf-3034ddf99446-socket-dir\") pod 
\"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.030855 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-646fw\" (UniqueName: \"kubernetes.io/projected/23260332-e384-4fc3-bcaf-3034ddf99446-kube-api-access-646fw\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.131725 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/23260332-e384-4fc3-bcaf-3034ddf99446-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.131796 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-646fw\" (UniqueName: \"kubernetes.io/projected/23260332-e384-4fc3-bcaf-3034ddf99446-kube-api-access-646fw\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.131862 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/23260332-e384-4fc3-bcaf-3034ddf99446-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.131902 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/23260332-e384-4fc3-bcaf-3034ddf99446-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.132343 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/23260332-e384-4fc3-bcaf-3034ddf99446-socket-dir\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.132772 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/23260332-e384-4fc3-bcaf-3034ddf99446-sg-core-config\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.149022 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: 
\"kubernetes.io/secret/23260332-e384-4fc3-bcaf-3034ddf99446-elastic-certs\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.162443 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-646fw\" (UniqueName: \"kubernetes.io/projected/23260332-e384-4fc3-bcaf-3034ddf99446-kube-api-access-646fw\") pod \"default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4\" (UID: \"23260332-e384-4fc3-bcaf-3034ddf99446\") " pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.246031 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm"] Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.271421 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.657976 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm"] Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.658118 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.661167 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"default-cloud1-ceil-event-sg-core-configmap\"" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.744820 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/966f83ef-f268-49d3-acca-8efb03045554-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.744987 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/966f83ef-f268-49d3-acca-8efb03045554-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.745257 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ztpgb\" (UniqueName: \"kubernetes.io/projected/966f83ef-f268-49d3-acca-8efb03045554-kube-api-access-ztpgb\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.745301 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/966f83ef-f268-49d3-acca-8efb03045554-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: 
\"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.847360 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/966f83ef-f268-49d3-acca-8efb03045554-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.847803 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/966f83ef-f268-49d3-acca-8efb03045554-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.848909 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sg-core-config\" (UniqueName: \"kubernetes.io/configmap/966f83ef-f268-49d3-acca-8efb03045554-sg-core-config\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.849723 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ztpgb\" (UniqueName: \"kubernetes.io/projected/966f83ef-f268-49d3-acca-8efb03045554-kube-api-access-ztpgb\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.849860 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/966f83ef-f268-49d3-acca-8efb03045554-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.850403 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"socket-dir\" (UniqueName: \"kubernetes.io/empty-dir/966f83ef-f268-49d3-acca-8efb03045554-socket-dir\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.856832 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"elastic-certs\" (UniqueName: \"kubernetes.io/secret/966f83ef-f268-49d3-acca-8efb03045554-elastic-certs\") pod \"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.876521 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ztpgb\" (UniqueName: \"kubernetes.io/projected/966f83ef-f268-49d3-acca-8efb03045554-kube-api-access-ztpgb\") pod 
\"default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm\" (UID: \"966f83ef-f268-49d3-acca-8efb03045554\") " pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:49 crc kubenswrapper[5117]: I0123 09:26:49.987776 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" Jan 23 09:26:51 crc kubenswrapper[5117]: I0123 09:26:51.069545 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/prometheus-default-0" Jan 23 09:26:57 crc kubenswrapper[5117]: I0123 09:26:57.388180 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4"] Jan 23 09:26:57 crc kubenswrapper[5117]: I0123 09:26:57.395365 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:26:57 crc kubenswrapper[5117]: I0123 09:26:57.411067 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm"] Jan 23 09:26:57 crc kubenswrapper[5117]: W0123 09:26:57.415002 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod966f83ef_f268_49d3_acca_8efb03045554.slice/crio-118dfb3accab408bab337c0e41abc72022252cae09c3ce9fae83fd4e9272e308 WatchSource:0}: Error finding container 118dfb3accab408bab337c0e41abc72022252cae09c3ce9fae83fd4e9272e308: Status 404 returned error can't find the container with id 118dfb3accab408bab337c0e41abc72022252cae09c3ce9fae83fd4e9272e308 Jan 23 09:26:57 crc kubenswrapper[5117]: I0123 09:26:57.790461 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerStarted","Data":"130d998bc584f2896d5ed9283b4c1265368dadc293a2ec66826adaabb1d4bb19"} Jan 23 09:26:57 crc kubenswrapper[5117]: I0123 09:26:57.801582 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerStarted","Data":"da6ccbac1a21dac0f55dbb68b075b18cb030428778ffcde5985b1fe90fa245f9"} Jan 23 09:26:57 crc kubenswrapper[5117]: I0123 09:26:57.803458 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerStarted","Data":"118dfb3accab408bab337c0e41abc72022252cae09c3ce9fae83fd4e9272e308"} Jan 23 09:26:58 crc kubenswrapper[5117]: I0123 09:26:58.816679 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerStarted","Data":"c12da176769c7a2c1eecd9986bf0084560468917d570bd8a5b3a3bee7d8e16d6"} Jan 23 09:26:58 crc kubenswrapper[5117]: I0123 09:26:58.820519 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerStarted","Data":"d205c8ac5bd185fb7e38ed3bd2aa56fae8e6276bd94a882c234a26115fbe93d2"} Jan 23 09:26:58 crc kubenswrapper[5117]: I0123 09:26:58.823403 5117 kubelet.go:2569] "SyncLoop (PLEG): event 
for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"129d4009-3ec8-41de-8bbe-cd549253e78d","Type":"ContainerStarted","Data":"e925ca3f6af04e25e8317caa97954ed69084c306bea04e1777037e7922459e01"} Jan 23 09:26:59 crc kubenswrapper[5117]: I0123 09:26:59.832519 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerStarted","Data":"4071710389d79eeada2471d70c63709104a31e91810a9ef95d56dcdfbd6653d8"} Jan 23 09:26:59 crc kubenswrapper[5117]: I0123 09:26:59.836243 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerStarted","Data":"0c28b97d5921cdaafe4575e2b3a1e197d379e4757c33a4a367b9aa744f634f4f"} Jan 23 09:26:59 crc kubenswrapper[5117]: I0123 09:26:59.839966 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerStarted","Data":"46319205ef3d58a016dad79308bb41fa939aeaf366e1f3ac557b9d814bd9c534"} Jan 23 09:27:01 crc kubenswrapper[5117]: I0123 09:27:01.069775 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/prometheus-default-0" Jan 23 09:27:01 crc kubenswrapper[5117]: I0123 09:27:01.130225 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/prometheus-default-0" Jan 23 09:27:01 crc kubenswrapper[5117]: I0123 09:27:01.864790 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"129d4009-3ec8-41de-8bbe-cd549253e78d","Type":"ContainerStarted","Data":"a19b359f509388e3438020e3782a37d4ffb2f761c5eac20f621ab2698c6d252f"} Jan 23 09:27:01 crc kubenswrapper[5117]: I0123 09:27:01.909300 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/prometheus-default-0" Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.912700 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerStarted","Data":"1408be835c3bfab0dc95510c188799bc0ee765f0965088cd3e14bd4db00703f3"} Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.916287 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/alertmanager-default-0" event={"ID":"129d4009-3ec8-41de-8bbe-cd549253e78d","Type":"ContainerStarted","Data":"1619a1c6424232f170acc52a80750c660099c2e788bc8efd7dc8badb51a45814"} Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.919065 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerStarted","Data":"e080668a1a968453c77aafdf30d8fe3057a03e621aa41a500307524a382dcd64"} Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.921802 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerStarted","Data":"4d9dc7f509d6fd28016b6e773fa1ade86b7e6f933ccbe768f5afe52198313194"} Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.925119 
5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerStarted","Data":"472cda81957085c96153a9b0b39f21756c0d37b582830d2e240464ec87be1fc5"} Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.928165 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerStarted","Data":"c2adc1378e5f4eea57a44da2e2c904ea069d0632dbeaa4cb1d0a26f62e70be88"} Jan 23 09:27:05 crc kubenswrapper[5117]: I0123 09:27:05.945532 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" podStartSLOduration=4.274069439 podStartE2EDuration="30.945512926s" podCreationTimestamp="2026-01-23 09:26:35 +0000 UTC" firstStartedPulling="2026-01-23 09:26:38.14719138 +0000 UTC m=+2009.903316406" lastFinishedPulling="2026-01-23 09:27:04.818634867 +0000 UTC m=+2036.574759893" observedRunningTime="2026-01-23 09:27:05.935198549 +0000 UTC m=+2037.691323575" watchObservedRunningTime="2026-01-23 09:27:05.945512926 +0000 UTC m=+2037.701637952" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.014118 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" podStartSLOduration=9.57483541 podStartE2EDuration="17.014093588s" podCreationTimestamp="2026-01-23 09:26:49 +0000 UTC" firstStartedPulling="2026-01-23 09:26:57.418259673 +0000 UTC m=+2029.174384699" lastFinishedPulling="2026-01-23 09:27:04.857517851 +0000 UTC m=+2036.613642877" observedRunningTime="2026-01-23 09:27:05.982506588 +0000 UTC m=+2037.738631624" watchObservedRunningTime="2026-01-23 09:27:06.014093588 +0000 UTC m=+2037.770218614" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.017706 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/alertmanager-default-0" podStartSLOduration=35.895075617 podStartE2EDuration="54.017686188s" podCreationTimestamp="2026-01-23 09:26:12 +0000 UTC" firstStartedPulling="2026-01-23 09:26:46.679850975 +0000 UTC m=+2018.435976001" lastFinishedPulling="2026-01-23 09:27:04.802461546 +0000 UTC m=+2036.558586572" observedRunningTime="2026-01-23 09:27:06.016637019 +0000 UTC m=+2037.772762065" watchObservedRunningTime="2026-01-23 09:27:06.017686188 +0000 UTC m=+2037.773811214" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.044316 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" podStartSLOduration=10.640928813 podStartE2EDuration="18.04428911s" podCreationTimestamp="2026-01-23 09:26:48 +0000 UTC" firstStartedPulling="2026-01-23 09:26:57.397951297 +0000 UTC m=+2029.154076323" lastFinishedPulling="2026-01-23 09:27:04.801311594 +0000 UTC m=+2036.557436620" observedRunningTime="2026-01-23 09:27:06.036700498 +0000 UTC m=+2037.792825534" watchObservedRunningTime="2026-01-23 09:27:06.04428911 +0000 UTC m=+2037.800414136" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.057303 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" podStartSLOduration=9.250415205 podStartE2EDuration="26.057282182s" 
podCreationTimestamp="2026-01-23 09:26:40 +0000 UTC" firstStartedPulling="2026-01-23 09:26:48.108394913 +0000 UTC m=+2019.864519929" lastFinishedPulling="2026-01-23 09:27:04.91526188 +0000 UTC m=+2036.671386906" observedRunningTime="2026-01-23 09:27:06.056521831 +0000 UTC m=+2037.812646867" watchObservedRunningTime="2026-01-23 09:27:06.057282182 +0000 UTC m=+2037.813407208" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.083963 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" podStartSLOduration=6.164457251 podStartE2EDuration="34.083940595s" podCreationTimestamp="2026-01-23 09:26:32 +0000 UTC" firstStartedPulling="2026-01-23 09:26:36.96971517 +0000 UTC m=+2008.725840196" lastFinishedPulling="2026-01-23 09:27:04.889198514 +0000 UTC m=+2036.645323540" observedRunningTime="2026-01-23 09:27:06.081207629 +0000 UTC m=+2037.837332675" watchObservedRunningTime="2026-01-23 09:27:06.083940595 +0000 UTC m=+2037.840065621" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.437813 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-g2mnr"] Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.438103 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" podUID="f5adda23-da5c-4bf9-9074-7a4deaab679e" containerName="default-interconnect" containerID="cri-o://c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c" gracePeriod=30 Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.816238 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.858735 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-q72v2"] Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.859549 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="f5adda23-da5c-4bf9-9074-7a4deaab679e" containerName="default-interconnect" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.859570 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="f5adda23-da5c-4bf9-9074-7a4deaab679e" containerName="default-interconnect" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.859865 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="f5adda23-da5c-4bf9-9074-7a4deaab679e" containerName="default-interconnect" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.876302 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.882387 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-q72v2"] Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.942959 5117 generic.go:358] "Generic (PLEG): container finished" podID="4384e25c-87ff-41fc-8c37-e85b49dc0035" containerID="130d998bc584f2896d5ed9283b4c1265368dadc293a2ec66826adaabb1d4bb19" exitCode=0 Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.943091 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerDied","Data":"130d998bc584f2896d5ed9283b4c1265368dadc293a2ec66826adaabb1d4bb19"} Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.943709 5117 scope.go:117] "RemoveContainer" containerID="130d998bc584f2896d5ed9283b4c1265368dadc293a2ec66826adaabb1d4bb19" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.948740 5117 generic.go:358] "Generic (PLEG): container finished" podID="23260332-e384-4fc3-bcaf-3034ddf99446" containerID="46319205ef3d58a016dad79308bb41fa939aeaf366e1f3ac557b9d814bd9c534" exitCode=0 Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.949548 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerDied","Data":"46319205ef3d58a016dad79308bb41fa939aeaf366e1f3ac557b9d814bd9c534"} Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.951389 5117 scope.go:117] "RemoveContainer" containerID="46319205ef3d58a016dad79308bb41fa939aeaf366e1f3ac557b9d814bd9c534" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.957634 5117 generic.go:358] "Generic (PLEG): container finished" podID="f5adda23-da5c-4bf9-9074-7a4deaab679e" containerID="c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c" exitCode=0 Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.957814 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" event={"ID":"f5adda23-da5c-4bf9-9074-7a4deaab679e","Type":"ContainerDied","Data":"c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c"} Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.957877 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" event={"ID":"f5adda23-da5c-4bf9-9074-7a4deaab679e","Type":"ContainerDied","Data":"4141468fb2bfc2a79462ad85a840a198bb8298b09a343c704878bf1c4d925afb"} Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.957899 5117 scope.go:117] "RemoveContainer" containerID="c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.958117 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-g2mnr" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962665 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-credentials\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962762 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-config\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962789 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-credentials\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962827 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-users\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962897 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-ca\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962920 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-ca\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.962990 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lfl7d\" (UniqueName: \"kubernetes.io/projected/f5adda23-da5c-4bf9-9074-7a4deaab679e-kube-api-access-lfl7d\") pod \"f5adda23-da5c-4bf9-9074-7a4deaab679e\" (UID: \"f5adda23-da5c-4bf9-9074-7a4deaab679e\") " Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.963103 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-52tb2\" (UniqueName: \"kubernetes.io/projected/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-kube-api-access-52tb2\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.963742 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-config" (OuterVolumeSpecName: "sasl-config") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). 
InnerVolumeSpecName "sasl-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.964075 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.964110 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.964155 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.966565 5117 generic.go:358] "Generic (PLEG): container finished" podID="966f83ef-f268-49d3-acca-8efb03045554" containerID="4071710389d79eeada2471d70c63709104a31e91810a9ef95d56dcdfbd6653d8" exitCode=0 Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.966912 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerDied","Data":"4071710389d79eeada2471d70c63709104a31e91810a9ef95d56dcdfbd6653d8"} Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.967379 5117 scope.go:117] "RemoveContainer" containerID="4071710389d79eeada2471d70c63709104a31e91810a9ef95d56dcdfbd6653d8" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.969232 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.970659 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-sasl-config\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.970696 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-sasl-users\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: 
\"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.970812 5117 reconciler_common.go:299] "Volume detached for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.977091 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-credentials" (OuterVolumeSpecName: "default-interconnect-inter-router-credentials") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). InnerVolumeSpecName "default-interconnect-inter-router-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.977669 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-users" (OuterVolumeSpecName: "sasl-users") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). InnerVolumeSpecName "sasl-users". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.977672 5117 generic.go:358] "Generic (PLEG): container finished" podID="79b9017f-cee9-4b84-a6d2-1bd8843c9538" containerID="c12da176769c7a2c1eecd9986bf0084560468917d570bd8a5b3a3bee7d8e16d6" exitCode=0 Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.977704 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerDied","Data":"c12da176769c7a2c1eecd9986bf0084560468917d570bd8a5b3a3bee7d8e16d6"} Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.978159 5117 scope.go:117] "RemoveContainer" containerID="c12da176769c7a2c1eecd9986bf0084560468917d570bd8a5b3a3bee7d8e16d6" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.978485 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/f5adda23-da5c-4bf9-9074-7a4deaab679e-kube-api-access-lfl7d" (OuterVolumeSpecName: "kube-api-access-lfl7d") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). InnerVolumeSpecName "kube-api-access-lfl7d". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.994737 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-credentials" (OuterVolumeSpecName: "default-interconnect-openstack-credentials") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). InnerVolumeSpecName "default-interconnect-openstack-credentials". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:27:06 crc kubenswrapper[5117]: I0123 09:27:06.994976 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-ca" (OuterVolumeSpecName: "default-interconnect-inter-router-ca") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). InnerVolumeSpecName "default-interconnect-inter-router-ca". 
PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.016410 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-ca" (OuterVolumeSpecName: "default-interconnect-openstack-ca") pod "f5adda23-da5c-4bf9-9074-7a4deaab679e" (UID: "f5adda23-da5c-4bf9-9074-7a4deaab679e"). InnerVolumeSpecName "default-interconnect-openstack-ca". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.037083 5117 generic.go:358] "Generic (PLEG): container finished" podID="9f58aef0-7287-43a4-b280-bbf650fec94d" containerID="0c28b97d5921cdaafe4575e2b3a1e197d379e4757c33a4a367b9aa744f634f4f" exitCode=0 Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.039355 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerDied","Data":"0c28b97d5921cdaafe4575e2b3a1e197d379e4757c33a4a367b9aa744f634f4f"} Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.041858 5117 scope.go:117] "RemoveContainer" containerID="0c28b97d5921cdaafe4575e2b3a1e197d379e4757c33a4a367b9aa744f634f4f" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074054 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074142 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-sasl-config\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074171 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-sasl-users\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074243 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-52tb2\" (UniqueName: \"kubernetes.io/projected/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-kube-api-access-52tb2\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074326 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074356 5117 
reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074384 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074521 5117 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-credentials\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074537 5117 reconciler_common.go:299] "Volume detached for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-sasl-users\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074547 5117 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074557 5117 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-inter-router-ca\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074567 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lfl7d\" (UniqueName: \"kubernetes.io/projected/f5adda23-da5c-4bf9-9074-7a4deaab679e-kube-api-access-lfl7d\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.074581 5117 reconciler_common.go:299] "Volume detached for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/f5adda23-da5c-4bf9-9074-7a4deaab679e-default-interconnect-openstack-credentials\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.105208 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-ca\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-openstack-ca\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.105612 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-ca\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-inter-router-ca\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 
crc kubenswrapper[5117]: I0123 09:27:07.105613 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-inter-router-credentials\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-inter-router-credentials\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.107517 5117 scope.go:117] "RemoveContainer" containerID="c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.108239 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-openstack-credentials\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-default-interconnect-openstack-credentials\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: E0123 09:27:07.108591 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c\": container with ID starting with c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c not found: ID does not exist" containerID="c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.108645 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c"} err="failed to get container status \"c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c\": rpc error: code = NotFound desc = could not find container \"c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c\": container with ID starting with c88aa5dc2b1acf571be6a3f806d18363e7a9b72ff2ae55b584cb40318110005c not found: ID does not exist" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.117119 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-52tb2\" (UniqueName: \"kubernetes.io/projected/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-kube-api-access-52tb2\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.157664 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-config\" (UniqueName: \"kubernetes.io/configmap/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-sasl-config\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.167268 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sasl-users\" (UniqueName: \"kubernetes.io/secret/c12ea6ce-5bef-42c3-b1ad-71826c74f0ce-sasl-users\") pod \"default-interconnect-55bf8d5cb-q72v2\" (UID: \"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce\") " pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.200871 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.321304 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-g2mnr"] Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.334316 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-g2mnr"] Jan 23 09:27:07 crc kubenswrapper[5117]: I0123 09:27:07.535232 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/default-interconnect-55bf8d5cb-q72v2"] Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.057182 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerStarted","Data":"0500d055056a96da7e299f43d2b30d694f1b21fa63e62fd968d65098fbf5f213"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.060547 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerStarted","Data":"e0c50b4b8bf952be22b42d613d0711cedfb23b65d4955219722dd03c4debe2a6"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.065353 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerStarted","Data":"27d41e17b04cdd9d24eeaefeddb803531a20415592c5a32f1ceb7179da5f5e8b"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.068751 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerStarted","Data":"974f114f6b141b7a999466cd16be66148b1eaf5a3cf26f7ef95228788c6ff424"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.070268 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerStarted","Data":"8f5cb1c75f557cd6ca40c0f54cf5e4813b7fcb581b4bf0a0fc5c6e10300f3620"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.074424 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" event={"ID":"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce","Type":"ContainerStarted","Data":"00150e6c65b61d72cde84c36dff27ebdd5616887da7fd0bdc6bc2d0ddce31d8f"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.074468 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" event={"ID":"c12ea6ce-5bef-42c3-b1ad-71826c74f0ce","Type":"ContainerStarted","Data":"bccd12987e50563bc911b471543968eada22cedd6c97b78b621bca80a657bad0"} Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.220599 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/default-interconnect-55bf8d5cb-q72v2" podStartSLOduration=2.220580831 podStartE2EDuration="2.220580831s" podCreationTimestamp="2026-01-23 09:27:06 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:27:08.215954862 +0000 UTC m=+2039.972079898" 
watchObservedRunningTime="2026-01-23 09:27:08.220580831 +0000 UTC m=+2039.976705857" Jan 23 09:27:08 crc kubenswrapper[5117]: I0123 09:27:08.785030 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f5adda23-da5c-4bf9-9074-7a4deaab679e" path="/var/lib/kubelet/pods/f5adda23-da5c-4bf9-9074-7a4deaab679e/volumes" Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.083455 5117 generic.go:358] "Generic (PLEG): container finished" podID="23260332-e384-4fc3-bcaf-3034ddf99446" containerID="8f5cb1c75f557cd6ca40c0f54cf5e4813b7fcb581b4bf0a0fc5c6e10300f3620" exitCode=0 Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.083556 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerDied","Data":"8f5cb1c75f557cd6ca40c0f54cf5e4813b7fcb581b4bf0a0fc5c6e10300f3620"} Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.083624 5117 scope.go:117] "RemoveContainer" containerID="46319205ef3d58a016dad79308bb41fa939aeaf366e1f3ac557b9d814bd9c534" Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.084058 5117 scope.go:117] "RemoveContainer" containerID="8f5cb1c75f557cd6ca40c0f54cf5e4813b7fcb581b4bf0a0fc5c6e10300f3620" Jan 23 09:27:09 crc kubenswrapper[5117]: E0123 09:27:09.084412 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4_service-telemetry(23260332-e384-4fc3-bcaf-3034ddf99446)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" podUID="23260332-e384-4fc3-bcaf-3034ddf99446" Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.086000 5117 generic.go:358] "Generic (PLEG): container finished" podID="966f83ef-f268-49d3-acca-8efb03045554" containerID="0500d055056a96da7e299f43d2b30d694f1b21fa63e62fd968d65098fbf5f213" exitCode=0 Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.086137 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerDied","Data":"0500d055056a96da7e299f43d2b30d694f1b21fa63e62fd968d65098fbf5f213"} Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.087044 5117 scope.go:117] "RemoveContainer" containerID="0500d055056a96da7e299f43d2b30d694f1b21fa63e62fd968d65098fbf5f213" Jan 23 09:27:09 crc kubenswrapper[5117]: E0123 09:27:09.087369 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm_service-telemetry(966f83ef-f268-49d3-acca-8efb03045554)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" podUID="966f83ef-f268-49d3-acca-8efb03045554" Jan 23 09:27:09 crc kubenswrapper[5117]: I0123 09:27:09.125699 5117 scope.go:117] "RemoveContainer" containerID="4071710389d79eeada2471d70c63709104a31e91810a9ef95d56dcdfbd6653d8" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.097866 5117 generic.go:358] "Generic (PLEG): container finished" podID="9f58aef0-7287-43a4-b280-bbf650fec94d" containerID="27d41e17b04cdd9d24eeaefeddb803531a20415592c5a32f1ceb7179da5f5e8b" exitCode=0 Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.098259 5117 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerDied","Data":"27d41e17b04cdd9d24eeaefeddb803531a20415592c5a32f1ceb7179da5f5e8b"} Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.098294 5117 scope.go:117] "RemoveContainer" containerID="0c28b97d5921cdaafe4575e2b3a1e197d379e4757c33a4a367b9aa744f634f4f" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.098805 5117 scope.go:117] "RemoveContainer" containerID="27d41e17b04cdd9d24eeaefeddb803531a20415592c5a32f1ceb7179da5f5e8b" Jan 23 09:27:10 crc kubenswrapper[5117]: E0123 09:27:10.099066 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt_service-telemetry(9f58aef0-7287-43a4-b280-bbf650fec94d)\"" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" podUID="9f58aef0-7287-43a4-b280-bbf650fec94d" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.104294 5117 generic.go:358] "Generic (PLEG): container finished" podID="4384e25c-87ff-41fc-8c37-e85b49dc0035" containerID="974f114f6b141b7a999466cd16be66148b1eaf5a3cf26f7ef95228788c6ff424" exitCode=0 Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.104387 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerDied","Data":"974f114f6b141b7a999466cd16be66148b1eaf5a3cf26f7ef95228788c6ff424"} Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.105030 5117 scope.go:117] "RemoveContainer" containerID="974f114f6b141b7a999466cd16be66148b1eaf5a3cf26f7ef95228788c6ff424" Jan 23 09:27:10 crc kubenswrapper[5117]: E0123 09:27:10.105416 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5_service-telemetry(4384e25c-87ff-41fc-8c37-e85b49dc0035)\"" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" podUID="4384e25c-87ff-41fc-8c37-e85b49dc0035" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.110355 5117 scope.go:117] "RemoveContainer" containerID="8f5cb1c75f557cd6ca40c0f54cf5e4813b7fcb581b4bf0a0fc5c6e10300f3620" Jan 23 09:27:10 crc kubenswrapper[5117]: E0123 09:27:10.110810 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4_service-telemetry(23260332-e384-4fc3-bcaf-3034ddf99446)\"" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" podUID="23260332-e384-4fc3-bcaf-3034ddf99446" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.112653 5117 scope.go:117] "RemoveContainer" containerID="0500d055056a96da7e299f43d2b30d694f1b21fa63e62fd968d65098fbf5f213" Jan 23 09:27:10 crc kubenswrapper[5117]: E0123 09:27:10.112997 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge 
pod=default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm_service-telemetry(966f83ef-f268-49d3-acca-8efb03045554)\"" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" podUID="966f83ef-f268-49d3-acca-8efb03045554" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.116808 5117 generic.go:358] "Generic (PLEG): container finished" podID="79b9017f-cee9-4b84-a6d2-1bd8843c9538" containerID="e0c50b4b8bf952be22b42d613d0711cedfb23b65d4955219722dd03c4debe2a6" exitCode=0 Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.116949 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerDied","Data":"e0c50b4b8bf952be22b42d613d0711cedfb23b65d4955219722dd03c4debe2a6"} Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.117560 5117 scope.go:117] "RemoveContainer" containerID="e0c50b4b8bf952be22b42d613d0711cedfb23b65d4955219722dd03c4debe2a6" Jan 23 09:27:10 crc kubenswrapper[5117]: E0123 09:27:10.119461 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"bridge\" with CrashLoopBackOff: \"back-off 10s restarting failed container=bridge pod=default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6_service-telemetry(79b9017f-cee9-4b84-a6d2-1bd8843c9538)\"" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" podUID="79b9017f-cee9-4b84-a6d2-1bd8843c9538" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.538101 5117 scope.go:117] "RemoveContainer" containerID="130d998bc584f2896d5ed9283b4c1265368dadc293a2ec66826adaabb1d4bb19" Jan 23 09:27:10 crc kubenswrapper[5117]: I0123 09:27:10.819381 5117 scope.go:117] "RemoveContainer" containerID="c12da176769c7a2c1eecd9986bf0084560468917d570bd8a5b3a3bee7d8e16d6" Jan 23 09:27:20 crc kubenswrapper[5117]: I0123 09:27:20.774359 5117 scope.go:117] "RemoveContainer" containerID="27d41e17b04cdd9d24eeaefeddb803531a20415592c5a32f1ceb7179da5f5e8b" Jan 23 09:27:20 crc kubenswrapper[5117]: I0123 09:27:20.774899 5117 scope.go:117] "RemoveContainer" containerID="0500d055056a96da7e299f43d2b30d694f1b21fa63e62fd968d65098fbf5f213" Jan 23 09:27:21 crc kubenswrapper[5117]: I0123 09:27:21.771309 5117 scope.go:117] "RemoveContainer" containerID="974f114f6b141b7a999466cd16be66148b1eaf5a3cf26f7ef95228788c6ff424" Jan 23 09:27:22 crc kubenswrapper[5117]: I0123 09:27:22.771407 5117 scope.go:117] "RemoveContainer" containerID="e0c50b4b8bf952be22b42d613d0711cedfb23b65d4955219722dd03c4debe2a6" Jan 23 09:27:23 crc kubenswrapper[5117]: I0123 09:27:23.217459 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6" event={"ID":"79b9017f-cee9-4b84-a6d2-1bd8843c9538","Type":"ContainerStarted","Data":"8014a4b24a78e46e11bdcbbff910d965ef56f7d911c457f3e4386e2419e0229b"} Jan 23 09:27:23 crc kubenswrapper[5117]: I0123 09:27:23.221067 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt" event={"ID":"9f58aef0-7287-43a4-b280-bbf650fec94d","Type":"ContainerStarted","Data":"f0758f2139f556e7ed9c2b781479d1b0591a429db1c93c6269afc47a33765c7f"} Jan 23 09:27:23 crc kubenswrapper[5117]: I0123 09:27:23.224414 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5" 
event={"ID":"4384e25c-87ff-41fc-8c37-e85b49dc0035","Type":"ContainerStarted","Data":"2e332be6264ecc2c3d1cc8f8fa107c7e8597082491ca499d6be3a103ffddcaf6"} Jan 23 09:27:23 crc kubenswrapper[5117]: I0123 09:27:23.226311 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm" event={"ID":"966f83ef-f268-49d3-acca-8efb03045554","Type":"ContainerStarted","Data":"666d6fa50d9a8a9ddb0e49fe9ac646558569203f6eaf9357c8aa4f391b0351a9"} Jan 23 09:27:24 crc kubenswrapper[5117]: I0123 09:27:24.770941 5117 scope.go:117] "RemoveContainer" containerID="8f5cb1c75f557cd6ca40c0f54cf5e4813b7fcb581b4bf0a0fc5c6e10300f3620" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.257845 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4" event={"ID":"23260332-e384-4fc3-bcaf-3034ddf99446","Type":"ContainerStarted","Data":"3f165e962f1edfa9c41e077025590857c81f83a23dc78b3240c261d5624bf0c5"} Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.603265 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/certified-operators-8wgz6"] Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.613633 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.618490 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wgz6"] Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.687709 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kndkm\" (UniqueName: \"kubernetes.io/projected/a3b15e06-8263-4e39-85ee-b0a710a74887-kube-api-access-kndkm\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.687852 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-utilities\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.687953 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-catalog-content\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.789400 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-catalog-content\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.789478 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kndkm\" (UniqueName: \"kubernetes.io/projected/a3b15e06-8263-4e39-85ee-b0a710a74887-kube-api-access-kndkm\") pod \"certified-operators-8wgz6\" (UID: 
\"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.789565 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-utilities\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.790121 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-utilities\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.790410 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-catalog-content\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.822618 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kndkm\" (UniqueName: \"kubernetes.io/projected/a3b15e06-8263-4e39-85ee-b0a710a74887-kube-api-access-kndkm\") pod \"certified-operators-8wgz6\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:25 crc kubenswrapper[5117]: I0123 09:27:25.932306 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:26 crc kubenswrapper[5117]: I0123 09:27:26.459379 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/certified-operators-8wgz6"] Jan 23 09:27:26 crc kubenswrapper[5117]: W0123 09:27:26.462195 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-poda3b15e06_8263_4e39_85ee_b0a710a74887.slice/crio-b1348a5c70c291278ab059dffbc8431a45d603be90052d0d7ada920a4062b04d WatchSource:0}: Error finding container b1348a5c70c291278ab059dffbc8431a45d603be90052d0d7ada920a4062b04d: Status 404 returned error can't find the container with id b1348a5c70c291278ab059dffbc8431a45d603be90052d0d7ada920a4062b04d Jan 23 09:27:27 crc kubenswrapper[5117]: I0123 09:27:27.274209 5117 generic.go:358] "Generic (PLEG): container finished" podID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerID="c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b" exitCode=0 Jan 23 09:27:27 crc kubenswrapper[5117]: I0123 09:27:27.274308 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerDied","Data":"c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b"} Jan 23 09:27:27 crc kubenswrapper[5117]: I0123 09:27:27.274674 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerStarted","Data":"b1348a5c70c291278ab059dffbc8431a45d603be90052d0d7ada920a4062b04d"} Jan 23 09:27:28 crc kubenswrapper[5117]: I0123 09:27:28.285486 5117 kubelet.go:2569] "SyncLoop 
(PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerStarted","Data":"f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f"} Jan 23 09:27:29 crc kubenswrapper[5117]: I0123 09:27:29.300689 5117 generic.go:358] "Generic (PLEG): container finished" podID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerID="f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f" exitCode=0 Jan 23 09:27:29 crc kubenswrapper[5117]: I0123 09:27:29.300869 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerDied","Data":"f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f"} Jan 23 09:27:31 crc kubenswrapper[5117]: I0123 09:27:31.334384 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerStarted","Data":"f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a"} Jan 23 09:27:31 crc kubenswrapper[5117]: I0123 09:27:31.361492 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/certified-operators-8wgz6" podStartSLOduration=5.640666366 podStartE2EDuration="6.361472888s" podCreationTimestamp="2026-01-23 09:27:25 +0000 UTC" firstStartedPulling="2026-01-23 09:27:27.275425645 +0000 UTC m=+2059.031550671" lastFinishedPulling="2026-01-23 09:27:27.996232167 +0000 UTC m=+2059.752357193" observedRunningTime="2026-01-23 09:27:31.354776681 +0000 UTC m=+2063.110901727" watchObservedRunningTime="2026-01-23 09:27:31.361472888 +0000 UTC m=+2063.117597914" Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.169889 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/qdr-test"] Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.868271 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.868408 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/qdr-test" Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.870972 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"service-telemetry\"/\"default-interconnect-selfsigned\"" Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.871576 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"qdr-test-config\"" Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.933172 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.933220 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:35 crc kubenswrapper[5117]: I0123 09:27:35.976312 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.041713 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-vpnvr\" (UniqueName: \"kubernetes.io/projected/bbafda5a-c4f1-4a92-b261-66678e272673-kube-api-access-vpnvr\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.041875 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/bbafda5a-c4f1-4a92-b261-66678e272673-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.041962 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/bbafda5a-c4f1-4a92-b261-66678e272673-qdr-test-config\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.143395 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"qdr-test-config\" (UniqueName: \"kubernetes.io/configmap/bbafda5a-c4f1-4a92-b261-66678e272673-qdr-test-config\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.143509 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-vpnvr\" (UniqueName: \"kubernetes.io/projected/bbafda5a-c4f1-4a92-b261-66678e272673-kube-api-access-vpnvr\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.143598 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/bbafda5a-c4f1-4a92-b261-66678e272673-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.144626 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"qdr-test-config\" (UniqueName: 
\"kubernetes.io/configmap/bbafda5a-c4f1-4a92-b261-66678e272673-qdr-test-config\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.150839 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"default-interconnect-selfsigned-cert\" (UniqueName: \"kubernetes.io/secret/bbafda5a-c4f1-4a92-b261-66678e272673-default-interconnect-selfsigned-cert\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.165652 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-vpnvr\" (UniqueName: \"kubernetes.io/projected/bbafda5a-c4f1-4a92-b261-66678e272673-kube-api-access-vpnvr\") pod \"qdr-test\" (UID: \"bbafda5a-c4f1-4a92-b261-66678e272673\") " pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.189501 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/qdr-test" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.442349 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.495335 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8wgz6"] Jan 23 09:27:36 crc kubenswrapper[5117]: I0123 09:27:36.641493 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/qdr-test"] Jan 23 09:27:37 crc kubenswrapper[5117]: I0123 09:27:37.399019 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"bbafda5a-c4f1-4a92-b261-66678e272673","Type":"ContainerStarted","Data":"79cb3f81dc99a28a5186e369ac2278562b1c3bb1952773bb01606850a2c523ca"} Jan 23 09:27:38 crc kubenswrapper[5117]: I0123 09:27:38.408739 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/certified-operators-8wgz6" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="registry-server" containerID="cri-o://f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a" gracePeriod=2 Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.225332 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.287548 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-catalog-content\") pod \"a3b15e06-8263-4e39-85ee-b0a710a74887\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.287638 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-utilities\") pod \"a3b15e06-8263-4e39-85ee-b0a710a74887\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.287800 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kndkm\" (UniqueName: \"kubernetes.io/projected/a3b15e06-8263-4e39-85ee-b0a710a74887-kube-api-access-kndkm\") pod \"a3b15e06-8263-4e39-85ee-b0a710a74887\" (UID: \"a3b15e06-8263-4e39-85ee-b0a710a74887\") " Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.289754 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-utilities" (OuterVolumeSpecName: "utilities") pod "a3b15e06-8263-4e39-85ee-b0a710a74887" (UID: "a3b15e06-8263-4e39-85ee-b0a710a74887"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.292870 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/a3b15e06-8263-4e39-85ee-b0a710a74887-kube-api-access-kndkm" (OuterVolumeSpecName: "kube-api-access-kndkm") pod "a3b15e06-8263-4e39-85ee-b0a710a74887" (UID: "a3b15e06-8263-4e39-85ee-b0a710a74887"). InnerVolumeSpecName "kube-api-access-kndkm". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.323448 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "a3b15e06-8263-4e39-85ee-b0a710a74887" (UID: "a3b15e06-8263-4e39-85ee-b0a710a74887"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.389274 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.389312 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/a3b15e06-8263-4e39-85ee-b0a710a74887-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.389325 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kndkm\" (UniqueName: \"kubernetes.io/projected/a3b15e06-8263-4e39-85ee-b0a710a74887-kube-api-access-kndkm\") on node \"crc\" DevicePath \"\"" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.417005 5117 generic.go:358] "Generic (PLEG): container finished" podID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerID="f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a" exitCode=0 Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.417116 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerDied","Data":"f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a"} Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.417156 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/certified-operators-8wgz6" event={"ID":"a3b15e06-8263-4e39-85ee-b0a710a74887","Type":"ContainerDied","Data":"b1348a5c70c291278ab059dffbc8431a45d603be90052d0d7ada920a4062b04d"} Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.417175 5117 scope.go:117] "RemoveContainer" containerID="f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.417306 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/certified-operators-8wgz6" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.452658 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/certified-operators-8wgz6"] Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.453936 5117 scope.go:117] "RemoveContainer" containerID="f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.461562 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/certified-operators-8wgz6"] Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.476624 5117 scope.go:117] "RemoveContainer" containerID="c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.511036 5117 scope.go:117] "RemoveContainer" containerID="f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a" Jan 23 09:27:39 crc kubenswrapper[5117]: E0123 09:27:39.511626 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a\": container with ID starting with f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a not found: ID does not exist" containerID="f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.511655 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a"} err="failed to get container status \"f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a\": rpc error: code = NotFound desc = could not find container \"f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a\": container with ID starting with f484c5841b648e1ec2c68212b269e23b7b9551594fffac12de924616de72262a not found: ID does not exist" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.511676 5117 scope.go:117] "RemoveContainer" containerID="f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f" Jan 23 09:27:39 crc kubenswrapper[5117]: E0123 09:27:39.512174 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f\": container with ID starting with f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f not found: ID does not exist" containerID="f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.512199 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f"} err="failed to get container status \"f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f\": rpc error: code = NotFound desc = could not find container \"f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f\": container with ID starting with f6b79de9669819ae5f1ccfc657f471bdf83a918b6f23677585ee91e4997f399f not found: ID does not exist" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.512213 5117 scope.go:117] "RemoveContainer" containerID="c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b" Jan 23 09:27:39 crc kubenswrapper[5117]: E0123 09:27:39.512492 5117 log.go:32] "ContainerStatus from runtime service 
failed" err="rpc error: code = NotFound desc = could not find container \"c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b\": container with ID starting with c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b not found: ID does not exist" containerID="c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b" Jan 23 09:27:39 crc kubenswrapper[5117]: I0123 09:27:39.512511 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b"} err="failed to get container status \"c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b\": rpc error: code = NotFound desc = could not find container \"c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b\": container with ID starting with c2e010f7a2be17dece12b84b24fddbe4661e2bd922b28681a612bbb997073b3b not found: ID does not exist" Jan 23 09:27:40 crc kubenswrapper[5117]: I0123 09:27:40.779927 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" path="/var/lib/kubelet/pods/a3b15e06-8263-4e39-85ee-b0a710a74887/volumes" Jan 23 09:27:45 crc kubenswrapper[5117]: I0123 09:27:45.063820 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:27:45 crc kubenswrapper[5117]: I0123 09:27:45.064224 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.496896 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/qdr-test" event={"ID":"bbafda5a-c4f1-4a92-b261-66678e272673","Type":"ContainerStarted","Data":"8e15c5d0768b966ab0bd6c8a6128ab1ba926a492cf0f4a239ad152814a455a78"} Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.521331 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/qdr-test" podStartSLOduration=2.872908176 podStartE2EDuration="11.521301046s" podCreationTimestamp="2026-01-23 09:27:35 +0000 UTC" firstStartedPulling="2026-01-23 09:27:36.663831534 +0000 UTC m=+2068.419956560" lastFinishedPulling="2026-01-23 09:27:45.312224404 +0000 UTC m=+2077.068349430" observedRunningTime="2026-01-23 09:27:46.513428826 +0000 UTC m=+2078.269553852" watchObservedRunningTime="2026-01-23 09:27:46.521301046 +0000 UTC m=+2078.277426072" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.812846 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-zp2vw"] Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.813837 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="registry-server" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.813857 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="registry-server" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.813902 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: 
removing container" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="extract-utilities" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.813911 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="extract-utilities" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.813923 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="extract-content" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.813929 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="extract-content" Jan 23 09:27:46 crc kubenswrapper[5117]: I0123 09:27:46.814073 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="a3b15e06-8263-4e39-85ee-b0a710a74887" containerName="registry-server" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.415699 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-zp2vw"] Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.415963 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/curl"] Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.415987 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.418348 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.418414 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.418780 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.418848 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.418964 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.434237 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509363 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-config\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509412 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-sensubility-config\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509435 5117 
reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-healthcheck-log\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509503 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-qz5sq\" (UniqueName: \"kubernetes.io/projected/e9506f8f-d145-44e0-903b-fbd429800467-kube-api-access-qz5sq\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509565 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-publisher\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509598 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.509650 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610640 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-config\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610701 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-sensubility-config\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610735 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-healthcheck-log\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610826 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-qz5sq\" (UniqueName: \"kubernetes.io/projected/e9506f8f-d145-44e0-903b-fbd429800467-kube-api-access-qz5sq\") pod 
\"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610867 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-publisher\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610907 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.610948 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.611942 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-config\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.611982 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-sensubility-config\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.612182 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.612453 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-publisher\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.612478 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.612603 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: 
\"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-healthcheck-log\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.643309 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-qz5sq\" (UniqueName: \"kubernetes.io/projected/e9506f8f-d145-44e0-903b-fbd429800467-kube-api-access-qz5sq\") pod \"stf-smoketest-smoke1-zp2vw\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.650568 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.650817 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.712098 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-nbrbm\" (UniqueName: \"kubernetes.io/projected/8565700c-1f7a-4fbb-ab87-0829b836aa03-kube-api-access-nbrbm\") pod \"curl\" (UID: \"8565700c-1f7a-4fbb-ab87-0829b836aa03\") " pod="service-telemetry/curl" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.737105 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.817668 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-nbrbm\" (UniqueName: \"kubernetes.io/projected/8565700c-1f7a-4fbb-ab87-0829b836aa03-kube-api-access-nbrbm\") pod \"curl\" (UID: \"8565700c-1f7a-4fbb-ab87-0829b836aa03\") " pod="service-telemetry/curl" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.845897 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-nbrbm\" (UniqueName: \"kubernetes.io/projected/8565700c-1f7a-4fbb-ab87-0829b836aa03-kube-api-access-nbrbm\") pod \"curl\" (UID: \"8565700c-1f7a-4fbb-ab87-0829b836aa03\") " pod="service-telemetry/curl" Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.933331 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-zp2vw"] Jan 23 09:27:47 crc kubenswrapper[5117]: I0123 09:27:47.977630 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/curl" Jan 23 09:27:48 crc kubenswrapper[5117]: I0123 09:27:48.212600 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/curl"] Jan 23 09:27:48 crc kubenswrapper[5117]: I0123 09:27:48.513469 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" event={"ID":"e9506f8f-d145-44e0-903b-fbd429800467","Type":"ContainerStarted","Data":"d33168c0f8d32dabe1f19a799eaf3820dea2544aff9560821127b2fb67b44295"} Jan 23 09:27:48 crc kubenswrapper[5117]: I0123 09:27:48.514857 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"8565700c-1f7a-4fbb-ab87-0829b836aa03","Type":"ContainerStarted","Data":"1953ff23175769d9059f0a6cc04ca7983a99b607f4104d95eff32ef66140052e"} Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.139366 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486008-wrnms"] Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.157353 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486008-wrnms"] Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.157524 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.160275 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.160821 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.160889 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.255722 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-zh95z\" (UniqueName: \"kubernetes.io/projected/7050f784-d7f3-4287-b267-292ddd8a13f1-kube-api-access-zh95z\") pod \"auto-csr-approver-29486008-wrnms\" (UID: \"7050f784-d7f3-4287-b267-292ddd8a13f1\") " pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.357734 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-zh95z\" (UniqueName: \"kubernetes.io/projected/7050f784-d7f3-4287-b267-292ddd8a13f1-kube-api-access-zh95z\") pod \"auto-csr-approver-29486008-wrnms\" (UID: \"7050f784-d7f3-4287-b267-292ddd8a13f1\") " pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.385162 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-zh95z\" (UniqueName: \"kubernetes.io/projected/7050f784-d7f3-4287-b267-292ddd8a13f1-kube-api-access-zh95z\") pod \"auto-csr-approver-29486008-wrnms\" (UID: \"7050f784-d7f3-4287-b267-292ddd8a13f1\") " pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:00 crc kubenswrapper[5117]: I0123 09:28:00.479282 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:02 crc kubenswrapper[5117]: I0123 09:28:02.033010 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486008-wrnms"] Jan 23 09:28:02 crc kubenswrapper[5117]: I0123 09:28:02.631702 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486008-wrnms" event={"ID":"7050f784-d7f3-4287-b267-292ddd8a13f1","Type":"ContainerStarted","Data":"fdc530f14420f8633cf86e2227ea8b683b1cc50d4ccaf46045b1d2e1132e9faa"} Jan 23 09:28:02 crc kubenswrapper[5117]: I0123 09:28:02.633207 5117 generic.go:358] "Generic (PLEG): container finished" podID="8565700c-1f7a-4fbb-ab87-0829b836aa03" containerID="f24ca2ac0b1cfedd9be8514adb86024ad648839b55787a041f5cfcf0a91fb270" exitCode=0 Jan 23 09:28:02 crc kubenswrapper[5117]: I0123 09:28:02.633256 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"8565700c-1f7a-4fbb-ab87-0829b836aa03","Type":"ContainerDied","Data":"f24ca2ac0b1cfedd9be8514adb86024ad648839b55787a041f5cfcf0a91fb270"} Jan 23 09:28:02 crc kubenswrapper[5117]: I0123 09:28:02.635350 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" event={"ID":"e9506f8f-d145-44e0-903b-fbd429800467","Type":"ContainerStarted","Data":"3c6c086404aedb177540fcd07371348c184a3568615538756a3fd68b41744594"} Jan 23 09:28:03 crc kubenswrapper[5117]: I0123 09:28:03.645607 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486008-wrnms" event={"ID":"7050f784-d7f3-4287-b267-292ddd8a13f1","Type":"ContainerStarted","Data":"d2409fcace37f4e306b924d2bd79bfdfab8005f355384190d239039f69e585a9"} Jan 23 09:28:03 crc kubenswrapper[5117]: I0123 09:28:03.664354 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29486008-wrnms" podStartSLOduration=2.771326979 podStartE2EDuration="3.66433548s" podCreationTimestamp="2026-01-23 09:28:00 +0000 UTC" firstStartedPulling="2026-01-23 09:28:02.048162132 +0000 UTC m=+2093.804287168" lastFinishedPulling="2026-01-23 09:28:02.941170643 +0000 UTC m=+2094.697295669" observedRunningTime="2026-01-23 09:28:03.656964475 +0000 UTC m=+2095.413089521" watchObservedRunningTime="2026-01-23 09:28:03.66433548 +0000 UTC m=+2095.420460506" Jan 23 09:28:03 crc kubenswrapper[5117]: I0123 09:28:03.982654 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.122836 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-nbrbm\" (UniqueName: \"kubernetes.io/projected/8565700c-1f7a-4fbb-ab87-0829b836aa03-kube-api-access-nbrbm\") pod \"8565700c-1f7a-4fbb-ab87-0829b836aa03\" (UID: \"8565700c-1f7a-4fbb-ab87-0829b836aa03\") " Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.130817 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/8565700c-1f7a-4fbb-ab87-0829b836aa03-kube-api-access-nbrbm" (OuterVolumeSpecName: "kube-api-access-nbrbm") pod "8565700c-1f7a-4fbb-ab87-0829b836aa03" (UID: "8565700c-1f7a-4fbb-ab87-0829b836aa03"). InnerVolumeSpecName "kube-api-access-nbrbm". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.189247 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_curl_8565700c-1f7a-4fbb-ab87-0829b836aa03/curl/0.log" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.225800 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-nbrbm\" (UniqueName: \"kubernetes.io/projected/8565700c-1f7a-4fbb-ab87-0829b836aa03-kube-api-access-nbrbm\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.500385 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-vrgvk_0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9/prometheus-webhook-snmp/0.log" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.655476 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/curl" event={"ID":"8565700c-1f7a-4fbb-ab87-0829b836aa03","Type":"ContainerDied","Data":"1953ff23175769d9059f0a6cc04ca7983a99b607f4104d95eff32ef66140052e"} Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.655581 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="1953ff23175769d9059f0a6cc04ca7983a99b607f4104d95eff32ef66140052e" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.655493 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/curl" Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.658840 5117 generic.go:358] "Generic (PLEG): container finished" podID="7050f784-d7f3-4287-b267-292ddd8a13f1" containerID="d2409fcace37f4e306b924d2bd79bfdfab8005f355384190d239039f69e585a9" exitCode=0 Jan 23 09:28:04 crc kubenswrapper[5117]: I0123 09:28:04.658949 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486008-wrnms" event={"ID":"7050f784-d7f3-4287-b267-292ddd8a13f1","Type":"ContainerDied","Data":"d2409fcace37f4e306b924d2bd79bfdfab8005f355384190d239039f69e585a9"} Jan 23 09:28:15 crc kubenswrapper[5117]: I0123 09:28:15.063191 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:28:15 crc kubenswrapper[5117]: I0123 09:28:15.063809 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:28:19 crc kubenswrapper[5117]: I0123 09:28:19.935829 5117 scope.go:117] "RemoveContainer" containerID="0205e928a4eefcc626b5f331383ff7fdc89d067e29bed8bf9151483455cdb845" Jan 23 09:28:34 crc kubenswrapper[5117]: I0123 09:28:34.637652 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-vrgvk_0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9/prometheus-webhook-snmp/0.log" Jan 23 09:28:36 crc kubenswrapper[5117]: I0123 09:28:36.943556 5117 generic.go:358] "Generic (PLEG): container finished" podID="e9506f8f-d145-44e0-903b-fbd429800467" containerID="3c6c086404aedb177540fcd07371348c184a3568615538756a3fd68b41744594" exitCode=1 Jan 23 09:28:36 crc kubenswrapper[5117]: I0123 09:28:36.943669 5117 
kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" event={"ID":"e9506f8f-d145-44e0-903b-fbd429800467","Type":"ContainerDied","Data":"3c6c086404aedb177540fcd07371348c184a3568615538756a3fd68b41744594"} Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.688969 5117 scope.go:117] "RemoveContainer" containerID="60ee385069963db5254faf81bc0ad5b8338e82036d199c1edbbd7022ad057d35" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.756166 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.796542 5117 scope.go:117] "RemoveContainer" containerID="5a6b35628be6a3de9fb997aad564ec208577fa5d9e141fbbb54587149a2cf183" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.825932 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.826359 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.833988 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.834953 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.874761 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-zh95z\" (UniqueName: \"kubernetes.io/projected/7050f784-d7f3-4287-b267-292ddd8a13f1-kube-api-access-zh95z\") pod \"7050f784-d7f3-4287-b267-292ddd8a13f1\" (UID: \"7050f784-d7f3-4287-b267-292ddd8a13f1\") " Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.881729 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/7050f784-d7f3-4287-b267-292ddd8a13f1-kube-api-access-zh95z" (OuterVolumeSpecName: "kube-api-access-zh95z") pod "7050f784-d7f3-4287-b267-292ddd8a13f1" (UID: "7050f784-d7f3-4287-b267-292ddd8a13f1"). InnerVolumeSpecName "kube-api-access-zh95z". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.892063 5117 scope.go:117] "RemoveContainer" containerID="c547d24974a079a9f3f2620f90b24be6b18da5c995a5b82569ed04efcea2d3d9" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.955292 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486008-wrnms" event={"ID":"7050f784-d7f3-4287-b267-292ddd8a13f1","Type":"ContainerDied","Data":"fdc530f14420f8633cf86e2227ea8b683b1cc50d4ccaf46045b1d2e1132e9faa"} Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.955353 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="fdc530f14420f8633cf86e2227ea8b683b1cc50d4ccaf46045b1d2e1132e9faa" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.955500 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486008-wrnms" Jan 23 09:28:37 crc kubenswrapper[5117]: I0123 09:28:37.978377 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-zh95z\" (UniqueName: \"kubernetes.io/projected/7050f784-d7f3-4287-b267-292ddd8a13f1-kube-api-access-zh95z\") on node \"crc\" DevicePath \"\"" Jan 23 09:28:38 crc kubenswrapper[5117]: I0123 09:28:38.823040 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29486002-ddpmt"] Jan 23 09:28:38 crc kubenswrapper[5117]: I0123 09:28:38.829563 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29486002-ddpmt"] Jan 23 09:28:40 crc kubenswrapper[5117]: I0123 09:28:40.797072 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="4f0c8992-ec3a-49be-9dfd-2801a6083d7a" path="/var/lib/kubelet/pods/4f0c8992-ec3a-49be-9dfd-2801a6083d7a/volumes" Jan 23 09:28:45 crc kubenswrapper[5117]: I0123 09:28:45.063305 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:28:45 crc kubenswrapper[5117]: I0123 09:28:45.063463 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:28:45 crc kubenswrapper[5117]: I0123 09:28:45.063546 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:28:45 crc kubenswrapper[5117]: I0123 09:28:45.064503 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"994fa97d1cb60133ddd28a5a7c053d2a40f4fd74acc6d90fde40e86efd34b82f"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:28:45 crc kubenswrapper[5117]: I0123 09:28:45.064581 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://994fa97d1cb60133ddd28a5a7c053d2a40f4fd74acc6d90fde40e86efd34b82f" gracePeriod=600 Jan 23 09:28:46 crc kubenswrapper[5117]: I0123 09:28:46.026491 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" event={"ID":"e9506f8f-d145-44e0-903b-fbd429800467","Type":"ContainerStarted","Data":"f2e1992a2e49865def3e664ac87b26b1831d6e5bdb62ca8c865de6b9532faa37"} Jan 23 09:28:46 crc kubenswrapper[5117]: I0123 09:28:46.026831 5117 scope.go:117] "RemoveContainer" containerID="3c6c086404aedb177540fcd07371348c184a3568615538756a3fd68b41744594" Jan 23 09:28:46 crc kubenswrapper[5117]: I0123 09:28:46.031576 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="994fa97d1cb60133ddd28a5a7c053d2a40f4fd74acc6d90fde40e86efd34b82f" exitCode=0 Jan 23 09:28:46 crc kubenswrapper[5117]: I0123 
09:28:46.031792 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"994fa97d1cb60133ddd28a5a7c053d2a40f4fd74acc6d90fde40e86efd34b82f"} Jan 23 09:28:46 crc kubenswrapper[5117]: I0123 09:28:46.031837 5117 scope.go:117] "RemoveContainer" containerID="29c56ce7380d2c172734bd3a9ef324ea9898f9f593df8644c22095838e17b2d9" Jan 23 09:28:47 crc kubenswrapper[5117]: I0123 09:28:47.041702 5117 scope.go:117] "RemoveContainer" containerID="3c6c086404aedb177540fcd07371348c184a3568615538756a3fd68b41744594" Jan 23 09:28:47 crc kubenswrapper[5117]: I0123 09:28:47.042176 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerStarted","Data":"44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62"} Jan 23 09:29:17 crc kubenswrapper[5117]: I0123 09:29:17.266340 5117 generic.go:358] "Generic (PLEG): container finished" podID="e9506f8f-d145-44e0-903b-fbd429800467" containerID="f2e1992a2e49865def3e664ac87b26b1831d6e5bdb62ca8c865de6b9532faa37" exitCode=0 Jan 23 09:29:17 crc kubenswrapper[5117]: I0123 09:29:17.266451 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" event={"ID":"e9506f8f-d145-44e0-903b-fbd429800467","Type":"ContainerDied","Data":"f2e1992a2e49865def3e664ac87b26b1831d6e5bdb62ca8c865de6b9532faa37"} Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.552332 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.633656 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-config\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.633988 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-qz5sq\" (UniqueName: \"kubernetes.io/projected/e9506f8f-d145-44e0-903b-fbd429800467-kube-api-access-qz5sq\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.634067 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-healthcheck-log\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.634238 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-publisher\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.634284 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-entrypoint-script\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: 
\"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.634323 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-entrypoint-script\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.634920 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-sensubility-config\") pod \"e9506f8f-d145-44e0-903b-fbd429800467\" (UID: \"e9506f8f-d145-44e0-903b-fbd429800467\") " Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.644671 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e9506f8f-d145-44e0-903b-fbd429800467-kube-api-access-qz5sq" (OuterVolumeSpecName: "kube-api-access-qz5sq") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "kube-api-access-qz5sq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.653668 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.654634 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.655857 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.656341 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.663196 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "ceilometer-entrypoint-script". 
PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.663514 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "e9506f8f-d145-44e0-903b-fbd429800467" (UID: "e9506f8f-d145-44e0-903b-fbd429800467"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744256 5117 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744295 5117 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744308 5117 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744320 5117 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744333 5117 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744345 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-qz5sq\" (UniqueName: \"kubernetes.io/projected/e9506f8f-d145-44e0-903b-fbd429800467-kube-api-access-qz5sq\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:18 crc kubenswrapper[5117]: I0123 09:29:18.744356 5117 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/e9506f8f-d145-44e0-903b-fbd429800467-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 23 09:29:19 crc kubenswrapper[5117]: I0123 09:29:19.287311 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" event={"ID":"e9506f8f-d145-44e0-903b-fbd429800467","Type":"ContainerDied","Data":"d33168c0f8d32dabe1f19a799eaf3820dea2544aff9560821127b2fb67b44295"} Jan 23 09:29:19 crc kubenswrapper[5117]: I0123 09:29:19.287355 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="d33168c0f8d32dabe1f19a799eaf3820dea2544aff9560821127b2fb67b44295" Jan 23 09:29:19 crc kubenswrapper[5117]: I0123 09:29:19.287428 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-zp2vw" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.023360 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hn2w6"] Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024517 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="8565700c-1f7a-4fbb-ab87-0829b836aa03" containerName="curl" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024530 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="8565700c-1f7a-4fbb-ab87-0829b836aa03" containerName="curl" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024544 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9506f8f-d145-44e0-903b-fbd429800467" containerName="smoketest-collectd" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024549 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9506f8f-d145-44e0-903b-fbd429800467" containerName="smoketest-collectd" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024582 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="7050f784-d7f3-4287-b267-292ddd8a13f1" containerName="oc" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024588 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="7050f784-d7f3-4287-b267-292ddd8a13f1" containerName="oc" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024599 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e9506f8f-d145-44e0-903b-fbd429800467" containerName="smoketest-ceilometer" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024604 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e9506f8f-d145-44e0-903b-fbd429800467" containerName="smoketest-ceilometer" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024718 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="8565700c-1f7a-4fbb-ab87-0829b836aa03" containerName="curl" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024726 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e9506f8f-d145-44e0-903b-fbd429800467" containerName="smoketest-ceilometer" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024738 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="7050f784-d7f3-4287-b267-292ddd8a13f1" containerName="oc" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.024749 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e9506f8f-d145-44e0-903b-fbd429800467" containerName="smoketest-collectd" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.995831 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hn2w6"] Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.995992 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.998975 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-healthcheck-log\"" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.999325 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-publisher\"" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.999479 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-ceilometer-entrypoint-script\"" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.999647 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-config\"" Jan 23 09:29:27 crc kubenswrapper[5117]: I0123 09:29:27.999920 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-collectd-entrypoint-script\"" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.000197 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"service-telemetry\"/\"stf-smoketest-sensubility-config\"" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.094708 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.094755 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-sensubility-config\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.094790 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-config\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.094861 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.095032 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-healthcheck-log\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.095075 5117 reconciler_common.go:251] 
"operationExecutor.VerifyControllerAttachedVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.095106 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kgcxw\" (UniqueName: \"kubernetes.io/projected/aef78462-cc9e-4454-b6a5-1ea7959874bb-kube-api-access-kgcxw\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196562 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196658 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-healthcheck-log\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196693 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196731 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kgcxw\" (UniqueName: \"kubernetes.io/projected/aef78462-cc9e-4454-b6a5-1ea7959874bb-kube-api-access-kgcxw\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196843 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196869 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-sensubility-config\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.196897 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-config\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: 
\"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.197886 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-entrypoint-script\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.198917 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-publisher\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.199266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-config\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.199266 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-sensubility-config\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.199452 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-healthcheck-log\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.199524 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-entrypoint-script\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.221817 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kgcxw\" (UniqueName: \"kubernetes.io/projected/aef78462-cc9e-4454-b6a5-1ea7959874bb-kube-api-access-kgcxw\") pod \"stf-smoketest-smoke1-hn2w6\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.323726 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:29:28 crc kubenswrapper[5117]: I0123 09:29:28.793853 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/stf-smoketest-smoke1-hn2w6"] Jan 23 09:29:29 crc kubenswrapper[5117]: I0123 09:29:29.376504 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" event={"ID":"aef78462-cc9e-4454-b6a5-1ea7959874bb","Type":"ContainerStarted","Data":"4900a58f67e3f22eac850ced838dac2e4c2e500fe3faf3d9e2f20b290ac51bca"} Jan 23 09:29:29 crc kubenswrapper[5117]: I0123 09:29:29.376846 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" event={"ID":"aef78462-cc9e-4454-b6a5-1ea7959874bb","Type":"ContainerStarted","Data":"7857046de8b52bfb5ea40702ee107fb411ea507bb513f2e139567212e967335a"} Jan 23 09:29:31 crc kubenswrapper[5117]: I0123 09:29:31.393778 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" event={"ID":"aef78462-cc9e-4454-b6a5-1ea7959874bb","Type":"ContainerStarted","Data":"c430296bd8ba475224735dc09cbcec0f002cd47422ef4fb5344cbc5fe5f91fe5"} Jan 23 09:29:31 crc kubenswrapper[5117]: I0123 09:29:31.419109 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" podStartSLOduration=4.41909124 podStartE2EDuration="4.41909124s" podCreationTimestamp="2026-01-23 09:29:27 +0000 UTC" firstStartedPulling="0001-01-01 00:00:00 +0000 UTC" lastFinishedPulling="0001-01-01 00:00:00 +0000 UTC" observedRunningTime="2026-01-23 09:29:31.416757665 +0000 UTC m=+2183.172882701" watchObservedRunningTime="2026-01-23 09:29:31.41909124 +0000 UTC m=+2183.175216266" Jan 23 09:29:38 crc kubenswrapper[5117]: I0123 09:29:38.001852 5117 scope.go:117] "RemoveContainer" containerID="bbe2a9fb45a14a8d8d1ee7a74daef83e236d3110ce459b6d6de8a0fe4514f6c1" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.152205 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486010-hnvf5"] Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.375767 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj"] Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.375974 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.378721 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.378993 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.380255 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.457888 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486010-hnvf5"] Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.457925 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj"] Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.458058 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.462602 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-config\"" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.463223 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-operator-lifecycle-manager\"/\"collect-profiles-dockercfg-vfqp6\"" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.550226 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-secret-volume\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.550434 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-ssmnv\" (UniqueName: \"kubernetes.io/projected/099837f0-dc7e-4b26-9605-a03e93899b1b-kube-api-access-ssmnv\") pod \"auto-csr-approver-29486010-hnvf5\" (UID: \"099837f0-dc7e-4b26-9605-a03e93899b1b\") " pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.550607 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-config-volume\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.550677 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-7b2xb\" (UniqueName: \"kubernetes.io/projected/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-kube-api-access-7b2xb\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.652484 5117 reconciler_common.go:224] 
"operationExecutor.MountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-secret-volume\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.652594 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-ssmnv\" (UniqueName: \"kubernetes.io/projected/099837f0-dc7e-4b26-9605-a03e93899b1b-kube-api-access-ssmnv\") pod \"auto-csr-approver-29486010-hnvf5\" (UID: \"099837f0-dc7e-4b26-9605-a03e93899b1b\") " pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.652651 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-config-volume\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.652688 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-7b2xb\" (UniqueName: \"kubernetes.io/projected/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-kube-api-access-7b2xb\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.654111 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-config-volume\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.659981 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-secret-volume\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.671396 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-7b2xb\" (UniqueName: \"kubernetes.io/projected/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-kube-api-access-7b2xb\") pod \"collect-profiles-29486010-xq5sj\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.675004 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-ssmnv\" (UniqueName: \"kubernetes.io/projected/099837f0-dc7e-4b26-9605-a03e93899b1b-kube-api-access-ssmnv\") pod \"auto-csr-approver-29486010-hnvf5\" (UID: \"099837f0-dc7e-4b26-9605-a03e93899b1b\") " pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.694355 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.780640 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:00 crc kubenswrapper[5117]: W0123 09:30:00.906146 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-besteffort.slice/kubepods-besteffort-pod099837f0_dc7e_4b26_9605_a03e93899b1b.slice/crio-2ead5977b6d1e436efbded07ed7419352cb407806033c36961afd6e19d5c2315 WatchSource:0}: Error finding container 2ead5977b6d1e436efbded07ed7419352cb407806033c36961afd6e19d5c2315: Status 404 returned error can't find the container with id 2ead5977b6d1e436efbded07ed7419352cb407806033c36961afd6e19d5c2315 Jan 23 09:30:00 crc kubenswrapper[5117]: I0123 09:30:00.906601 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486010-hnvf5"] Jan 23 09:30:01 crc kubenswrapper[5117]: I0123 09:30:01.236313 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj"] Jan 23 09:30:01 crc kubenswrapper[5117]: W0123 09:30:01.240911 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-pod3f7f0f2f_0ed2_4a0b_a7b0_3bf3bcdeec2e.slice/crio-77a1c4307b919864fc1b61d1feba327f257bac22055f73da07f7e03671b03362 WatchSource:0}: Error finding container 77a1c4307b919864fc1b61d1feba327f257bac22055f73da07f7e03671b03362: Status 404 returned error can't find the container with id 77a1c4307b919864fc1b61d1feba327f257bac22055f73da07f7e03671b03362 Jan 23 09:30:01 crc kubenswrapper[5117]: I0123 09:30:01.644050 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" event={"ID":"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e","Type":"ContainerStarted","Data":"77a1c4307b919864fc1b61d1feba327f257bac22055f73da07f7e03671b03362"} Jan 23 09:30:01 crc kubenswrapper[5117]: I0123 09:30:01.646102 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" event={"ID":"099837f0-dc7e-4b26-9605-a03e93899b1b","Type":"ContainerStarted","Data":"2ead5977b6d1e436efbded07ed7419352cb407806033c36961afd6e19d5c2315"} Jan 23 09:30:02 crc kubenswrapper[5117]: I0123 09:30:02.653908 5117 generic.go:358] "Generic (PLEG): container finished" podID="3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" containerID="e6baa35d004ede2d80de9d9489ffda2059d307c951da0c8e085fc75c9d6d0ff4" exitCode=0 Jan 23 09:30:02 crc kubenswrapper[5117]: I0123 09:30:02.653977 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" event={"ID":"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e","Type":"ContainerDied","Data":"e6baa35d004ede2d80de9d9489ffda2059d307c951da0c8e085fc75c9d6d0ff4"} Jan 23 09:30:03 crc kubenswrapper[5117]: I0123 09:30:03.666500 5117 generic.go:358] "Generic (PLEG): container finished" podID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerID="c430296bd8ba475224735dc09cbcec0f002cd47422ef4fb5344cbc5fe5f91fe5" exitCode=0 Jan 23 09:30:03 crc kubenswrapper[5117]: I0123 09:30:03.666529 5117 generic.go:358] "Generic (PLEG): container finished" podID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerID="4900a58f67e3f22eac850ced838dac2e4c2e500fe3faf3d9e2f20b290ac51bca" exitCode=0 Jan 23 09:30:03 crc 
kubenswrapper[5117]: I0123 09:30:03.666549 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" event={"ID":"aef78462-cc9e-4454-b6a5-1ea7959874bb","Type":"ContainerDied","Data":"c430296bd8ba475224735dc09cbcec0f002cd47422ef4fb5344cbc5fe5f91fe5"} Jan 23 09:30:03 crc kubenswrapper[5117]: I0123 09:30:03.666615 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" event={"ID":"aef78462-cc9e-4454-b6a5-1ea7959874bb","Type":"ContainerDied","Data":"4900a58f67e3f22eac850ced838dac2e4c2e500fe3faf3d9e2f20b290ac51bca"} Jan 23 09:30:03 crc kubenswrapper[5117]: I0123 09:30:03.893498 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.002078 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-7b2xb\" (UniqueName: \"kubernetes.io/projected/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-kube-api-access-7b2xb\") pod \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.002263 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-secret-volume\") pod \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.002288 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-config-volume\") pod \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\" (UID: \"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e\") " Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.003121 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-config-volume" (OuterVolumeSpecName: "config-volume") pod "3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" (UID: "3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e"). InnerVolumeSpecName "config-volume". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.008086 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/secret/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-secret-volume" (OuterVolumeSpecName: "secret-volume") pod "3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" (UID: "3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e"). InnerVolumeSpecName "secret-volume". PluginName "kubernetes.io/secret", VolumeGIDValue "" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.008324 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-kube-api-access-7b2xb" (OuterVolumeSpecName: "kube-api-access-7b2xb") pod "3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" (UID: "3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e"). InnerVolumeSpecName "kube-api-access-7b2xb". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.103956 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-7b2xb\" (UniqueName: \"kubernetes.io/projected/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-kube-api-access-7b2xb\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.103989 5117 reconciler_common.go:299] "Volume detached for volume \"secret-volume\" (UniqueName: \"kubernetes.io/secret/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-secret-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.103998 5117 reconciler_common.go:299] "Volume detached for volume \"config-volume\" (UniqueName: \"kubernetes.io/configmap/3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e-config-volume\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.676479 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.676531 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-operator-lifecycle-manager/collect-profiles-29486010-xq5sj" event={"ID":"3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e","Type":"ContainerDied","Data":"77a1c4307b919864fc1b61d1feba327f257bac22055f73da07f7e03671b03362"} Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.676605 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="77a1c4307b919864fc1b61d1feba327f257bac22055f73da07f7e03671b03362" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.943201 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.962807 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f"] Jan 23 09:30:04 crc kubenswrapper[5117]: I0123 09:30:04.969236 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-operator-lifecycle-manager/collect-profiles-29485965-7b78f"] Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.120708 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-config\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.120828 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-healthcheck-log\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.120886 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-kgcxw\" (UniqueName: \"kubernetes.io/projected/aef78462-cc9e-4454-b6a5-1ea7959874bb-kube-api-access-kgcxw\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.120954 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"sensubility-config\" (UniqueName: 
\"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-sensubility-config\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.121044 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-publisher\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.121072 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-entrypoint-script\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.121109 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-entrypoint-script\") pod \"aef78462-cc9e-4454-b6a5-1ea7959874bb\" (UID: \"aef78462-cc9e-4454-b6a5-1ea7959874bb\") " Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.126370 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/aef78462-cc9e-4454-b6a5-1ea7959874bb-kube-api-access-kgcxw" (OuterVolumeSpecName: "kube-api-access-kgcxw") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). InnerVolumeSpecName "kube-api-access-kgcxw". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.138473 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-config" (OuterVolumeSpecName: "collectd-config") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). InnerVolumeSpecName "collectd-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.139969 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-publisher" (OuterVolumeSpecName: "ceilometer-publisher") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). InnerVolumeSpecName "ceilometer-publisher". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.140720 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-entrypoint-script" (OuterVolumeSpecName: "collectd-entrypoint-script") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). InnerVolumeSpecName "collectd-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.142598 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-entrypoint-script" (OuterVolumeSpecName: "ceilometer-entrypoint-script") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). 
InnerVolumeSpecName "ceilometer-entrypoint-script". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.143638 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-sensubility-config" (OuterVolumeSpecName: "sensubility-config") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). InnerVolumeSpecName "sensubility-config". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.151605 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-healthcheck-log" (OuterVolumeSpecName: "healthcheck-log") pod "aef78462-cc9e-4454-b6a5-1ea7959874bb" (UID: "aef78462-cc9e-4454-b6a5-1ea7959874bb"). InnerVolumeSpecName "healthcheck-log". PluginName "kubernetes.io/configmap", VolumeGIDValue "" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223295 5117 reconciler_common.go:299] "Volume detached for volume \"ceilometer-publisher\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-publisher\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223336 5117 reconciler_common.go:299] "Volume detached for volume \"collectd-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223371 5117 reconciler_common.go:299] "Volume detached for volume \"ceilometer-entrypoint-script\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-ceilometer-entrypoint-script\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223384 5117 reconciler_common.go:299] "Volume detached for volume \"collectd-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-collectd-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223393 5117 reconciler_common.go:299] "Volume detached for volume \"healthcheck-log\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-healthcheck-log\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223401 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-kgcxw\" (UniqueName: \"kubernetes.io/projected/aef78462-cc9e-4454-b6a5-1ea7959874bb-kube-api-access-kgcxw\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.223410 5117 reconciler_common.go:299] "Volume detached for volume \"sensubility-config\" (UniqueName: \"kubernetes.io/configmap/aef78462-cc9e-4454-b6a5-1ea7959874bb-sensubility-config\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.685679 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" event={"ID":"099837f0-dc7e-4b26-9605-a03e93899b1b","Type":"ContainerStarted","Data":"f0257da8e2fd91def2c79d223d616f413a6bae47bbff12d12e71d8a30dc80d97"} Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.688448 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" 
event={"ID":"aef78462-cc9e-4454-b6a5-1ea7959874bb","Type":"ContainerDied","Data":"7857046de8b52bfb5ea40702ee107fb411ea507bb513f2e139567212e967335a"} Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.688482 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="7857046de8b52bfb5ea40702ee107fb411ea507bb513f2e139567212e967335a" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.688542 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/stf-smoketest-smoke1-hn2w6" Jan 23 09:30:05 crc kubenswrapper[5117]: I0123 09:30:05.710191 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" podStartSLOduration=2.096181502 podStartE2EDuration="5.710162746s" podCreationTimestamp="2026-01-23 09:30:00 +0000 UTC" firstStartedPulling="2026-01-23 09:30:00.907570831 +0000 UTC m=+2212.663695857" lastFinishedPulling="2026-01-23 09:30:04.521552075 +0000 UTC m=+2216.277677101" observedRunningTime="2026-01-23 09:30:05.701382081 +0000 UTC m=+2217.457507107" watchObservedRunningTime="2026-01-23 09:30:05.710162746 +0000 UTC m=+2217.466287772" Jan 23 09:30:06 crc kubenswrapper[5117]: I0123 09:30:06.701213 5117 generic.go:358] "Generic (PLEG): container finished" podID="099837f0-dc7e-4b26-9605-a03e93899b1b" containerID="f0257da8e2fd91def2c79d223d616f413a6bae47bbff12d12e71d8a30dc80d97" exitCode=0 Jan 23 09:30:06 crc kubenswrapper[5117]: I0123 09:30:06.701398 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" event={"ID":"099837f0-dc7e-4b26-9605-a03e93899b1b","Type":"ContainerDied","Data":"f0257da8e2fd91def2c79d223d616f413a6bae47bbff12d12e71d8a30dc80d97"} Jan 23 09:30:06 crc kubenswrapper[5117]: I0123 09:30:06.781168 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="f2500020-ee51-4792-a6d0-ca4c0f0fdec4" path="/var/lib/kubelet/pods/f2500020-ee51-4792-a6d0-ca4c0f0fdec4/volumes" Jan 23 09:30:06 crc kubenswrapper[5117]: I0123 09:30:06.869116 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-hn2w6_aef78462-cc9e-4454-b6a5-1ea7959874bb/smoketest-collectd/0.log" Jan 23 09:30:07 crc kubenswrapper[5117]: I0123 09:30:07.173611 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_stf-smoketest-smoke1-hn2w6_aef78462-cc9e-4454-b6a5-1ea7959874bb/smoketest-ceilometer/0.log" Jan 23 09:30:07 crc kubenswrapper[5117]: I0123 09:30:07.476664 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-interconnect-55bf8d5cb-q72v2_c12ea6ce-5bef-42c3-b1ad-71826c74f0ce/default-interconnect/0.log" Jan 23 09:30:07 crc kubenswrapper[5117]: I0123 09:30:07.716635 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6_79b9017f-cee9-4b84-a6d2-1bd8843c9538/bridge/2.log" Jan 23 09:30:07 crc kubenswrapper[5117]: I0123 09:30:07.993051 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:07 crc kubenswrapper[5117]: I0123 09:30:07.998192 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-meter-smartgateway-7f8f5c6486-sdtx6_79b9017f-cee9-4b84-a6d2-1bd8843c9538/sg-core/0.log" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.061994 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-ssmnv\" (UniqueName: \"kubernetes.io/projected/099837f0-dc7e-4b26-9605-a03e93899b1b-kube-api-access-ssmnv\") pod \"099837f0-dc7e-4b26-9605-a03e93899b1b\" (UID: \"099837f0-dc7e-4b26-9605-a03e93899b1b\") " Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.069269 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/099837f0-dc7e-4b26-9605-a03e93899b1b-kube-api-access-ssmnv" (OuterVolumeSpecName: "kube-api-access-ssmnv") pod "099837f0-dc7e-4b26-9605-a03e93899b1b" (UID: "099837f0-dc7e-4b26-9605-a03e93899b1b"). InnerVolumeSpecName "kube-api-access-ssmnv". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.164051 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-ssmnv\" (UniqueName: \"kubernetes.io/projected/099837f0-dc7e-4b26-9605-a03e93899b1b-kube-api-access-ssmnv\") on node \"crc\" DevicePath \"\"" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.296302 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4_23260332-e384-4fc3-bcaf-3034ddf99446/bridge/2.log" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.595736 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-coll-event-smartgateway-86cb4ff9c9-qtpd4_23260332-e384-4fc3-bcaf-3034ddf99446/sg-core/0.log" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.718049 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.718074 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486010-hnvf5" event={"ID":"099837f0-dc7e-4b26-9605-a03e93899b1b","Type":"ContainerDied","Data":"2ead5977b6d1e436efbded07ed7419352cb407806033c36961afd6e19d5c2315"} Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.718112 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="2ead5977b6d1e436efbded07ed7419352cb407806033c36961afd6e19d5c2315" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.756255 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29486004-zx4tx"] Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.762780 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29486004-zx4tx"] Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.796663 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d8028d1d-f059-45e1-bf54-3ce033e18492" path="/var/lib/kubelet/pods/d8028d1d-f059-45e1-bf54-3ce033e18492/volumes" Jan 23 09:30:08 crc kubenswrapper[5117]: I0123 09:30:08.905666 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5_4384e25c-87ff-41fc-8c37-e85b49dc0035/bridge/2.log" Jan 23 09:30:09 crc kubenswrapper[5117]: I0123 09:30:09.205879 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-meter-smartgateway-c9f4bb7dc-w8fp5_4384e25c-87ff-41fc-8c37-e85b49dc0035/sg-core/0.log" Jan 23 09:30:09 crc kubenswrapper[5117]: I0123 09:30:09.536650 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm_966f83ef-f268-49d3-acca-8efb03045554/bridge/2.log" Jan 23 09:30:09 crc kubenswrapper[5117]: I0123 09:30:09.791115 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-ceil-event-smartgateway-74797f95d6-7gzqm_966f83ef-f268-49d3-acca-8efb03045554/sg-core/0.log" Jan 23 09:30:10 crc kubenswrapper[5117]: I0123 09:30:10.099019 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt_9f58aef0-7287-43a4-b280-bbf650fec94d/bridge/2.log" Jan 23 09:30:10 crc kubenswrapper[5117]: I0123 09:30:10.352347 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_default-cloud1-sens-meter-smartgateway-58c78bbf69-dpjvt_9f58aef0-7287-43a4-b280-bbf650fec94d/sg-core/0.log" Jan 23 09:30:13 crc kubenswrapper[5117]: I0123 09:30:13.399921 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-7db5cb89d4-jkccw_8f53b642-43da-4d64-a951-124d3f32f87c/operator/0.log" Jan 23 09:30:13 crc kubenswrapper[5117]: I0123 09:30:13.696459 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_prometheus-default-0_6853448b-d202-4d14-ba3b-20f05356a3c4/prometheus/0.log" Jan 23 09:30:14 crc kubenswrapper[5117]: I0123 09:30:14.002394 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_elasticsearch-es-default-0_0e58c26d-fd70-4f32-b491-ae446b4c1769/elasticsearch/0.log" Jan 23 09:30:14 crc kubenswrapper[5117]: I0123 09:30:14.265259 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/service-telemetry_default-snmp-webhook-694dc457d5-vrgvk_0b3a11d9-8f77-4a65-a8f3-a27b911f9fa9/prometheus-webhook-snmp/0.log" Jan 23 09:30:14 crc kubenswrapper[5117]: I0123 09:30:14.531184 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_alertmanager-default-0_129d4009-3ec8-41de-8bbe-cd549253e78d/alertmanager/0.log" Jan 23 09:30:32 crc kubenswrapper[5117]: I0123 09:30:32.526836 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_service-telemetry-operator-6b94fdf6bd-pbrqg_b89a3b39-385f-45b6-969f-302f4368f39c/operator/0.log" Jan 23 09:30:35 crc kubenswrapper[5117]: I0123 09:30:35.818372 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_smart-gateway-operator-7db5cb89d4-jkccw_8f53b642-43da-4d64-a951-124d3f32f87c/operator/0.log" Jan 23 09:30:36 crc kubenswrapper[5117]: I0123 09:30:36.089570 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/service-telemetry_qdr-test_bbafda5a-c4f1-4a92-b261-66678e272673/qdr/0.log" Jan 23 09:30:38 crc kubenswrapper[5117]: I0123 09:30:38.109595 5117 scope.go:117] "RemoveContainer" containerID="537c1d95d21e174b5d36e2f9e85af6bae72eb5c95b7fa606fd1d8e0581206aab" Jan 23 09:30:38 crc kubenswrapper[5117]: I0123 09:30:38.201209 5117 scope.go:117] "RemoveContainer" containerID="0f21a485b03f8eda6772faca425ad22d93102675a3c10c63223bd2cd5eac91c5" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.105573 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-must-gather-rdz5n/must-gather-swfvf"] Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.106987 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerName="smoketest-ceilometer" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107004 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerName="smoketest-ceilometer" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107027 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="099837f0-dc7e-4b26-9605-a03e93899b1b" containerName="oc" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107034 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="099837f0-dc7e-4b26-9605-a03e93899b1b" containerName="oc" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107049 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerName="smoketest-collectd" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107056 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerName="smoketest-collectd" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107092 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" containerName="collect-profiles" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107099 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" containerName="collect-profiles" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107267 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="099837f0-dc7e-4b26-9605-a03e93899b1b" containerName="oc" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107280 5117 memory_manager.go:356] "RemoveStaleState removing state" 
podUID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerName="smoketest-collectd" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107293 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="3f7f0f2f-0ed2-4a0b-a7b0-3bf3bcdeec2e" containerName="collect-profiles" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.107303 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="aef78462-cc9e-4454-b6a5-1ea7959874bb" containerName="smoketest-ceilometer" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.516948 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rdz5n/must-gather-swfvf"] Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.517116 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.520221 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-rdz5n\"/\"openshift-service-ca.crt\"" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.520594 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-must-gather-rdz5n\"/\"default-dockercfg-wc227\"" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.520942 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-must-gather-rdz5n\"/\"kube-root-ca.crt\"" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.637352 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-must-gather-output\") pod \"must-gather-swfvf\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.637459 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-lbsjq\" (UniqueName: \"kubernetes.io/projected/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-kube-api-access-lbsjq\") pod \"must-gather-swfvf\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.739181 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-lbsjq\" (UniqueName: \"kubernetes.io/projected/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-kube-api-access-lbsjq\") pod \"must-gather-swfvf\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.739310 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-must-gather-output\") pod \"must-gather-swfvf\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.739766 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-must-gather-output\") pod \"must-gather-swfvf\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.763929 5117 
operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-lbsjq\" (UniqueName: \"kubernetes.io/projected/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-kube-api-access-lbsjq\") pod \"must-gather-swfvf\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:02 crc kubenswrapper[5117]: I0123 09:31:02.835411 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:31:03 crc kubenswrapper[5117]: I0123 09:31:03.048403 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-must-gather-rdz5n/must-gather-swfvf"] Jan 23 09:31:03 crc kubenswrapper[5117]: I0123 09:31:03.183192 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdz5n/must-gather-swfvf" event={"ID":"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3","Type":"ContainerStarted","Data":"9fdd5a11dba986bdd15180c230ed524e96b6eab8cf839b86da80eff0aafa8590"} Jan 23 09:31:15 crc kubenswrapper[5117]: I0123 09:31:15.152456 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:31:15 crc kubenswrapper[5117]: I0123 09:31:15.153248 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:31:16 crc kubenswrapper[5117]: I0123 09:31:16.295541 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdz5n/must-gather-swfvf" event={"ID":"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3","Type":"ContainerStarted","Data":"22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb"} Jan 23 09:31:16 crc kubenswrapper[5117]: I0123 09:31:16.296119 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdz5n/must-gather-swfvf" event={"ID":"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3","Type":"ContainerStarted","Data":"92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af"} Jan 23 09:31:16 crc kubenswrapper[5117]: I0123 09:31:16.312905 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-must-gather-rdz5n/must-gather-swfvf" podStartSLOduration=1.982762294 podStartE2EDuration="14.312883244s" podCreationTimestamp="2026-01-23 09:31:02 +0000 UTC" firstStartedPulling="2026-01-23 09:31:03.047423036 +0000 UTC m=+2274.803548062" lastFinishedPulling="2026-01-23 09:31:15.377543986 +0000 UTC m=+2287.133669012" observedRunningTime="2026-01-23 09:31:16.308826137 +0000 UTC m=+2288.064951183" watchObservedRunningTime="2026-01-23 09:31:16.312883244 +0000 UTC m=+2288.069008270" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.595296 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/community-operators-48dvs"] Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.738795 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48dvs"] Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.738959 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.819989 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-utilities\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.820192 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-catalog-content\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.820242 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-jv52l\" (UniqueName: \"kubernetes.io/projected/e603bb9b-c233-477a-a242-4beac60b50b3-kube-api-access-jv52l\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.921893 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-utilities\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.921967 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-catalog-content\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.921994 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-jv52l\" (UniqueName: \"kubernetes.io/projected/e603bb9b-c233-477a-a242-4beac60b50b3-kube-api-access-jv52l\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.922537 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-utilities\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.922616 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-catalog-content\") pod \"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:25 crc kubenswrapper[5117]: I0123 09:31:25.944979 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-jv52l\" (UniqueName: \"kubernetes.io/projected/e603bb9b-c233-477a-a242-4beac60b50b3-kube-api-access-jv52l\") pod 
\"community-operators-48dvs\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:26 crc kubenswrapper[5117]: I0123 09:31:26.059573 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:26 crc kubenswrapper[5117]: I0123 09:31:26.650365 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/community-operators-48dvs"] Jan 23 09:31:27 crc kubenswrapper[5117]: I0123 09:31:27.392953 5117 generic.go:358] "Generic (PLEG): container finished" podID="e603bb9b-c233-477a-a242-4beac60b50b3" containerID="d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561" exitCode=0 Jan 23 09:31:27 crc kubenswrapper[5117]: I0123 09:31:27.393046 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dvs" event={"ID":"e603bb9b-c233-477a-a242-4beac60b50b3","Type":"ContainerDied","Data":"d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561"} Jan 23 09:31:27 crc kubenswrapper[5117]: I0123 09:31:27.393356 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dvs" event={"ID":"e603bb9b-c233-477a-a242-4beac60b50b3","Type":"ContainerStarted","Data":"d6bd45e8520ecbf4574a92be2381566a152f9f98242658d3b4718709af3c50d4"} Jan 23 09:31:27 crc kubenswrapper[5117]: I0123 09:31:27.993902 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["service-telemetry/infrawatch-operators-865q7"] Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.384632 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-865q7"] Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.386609 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.478425 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-6sv48\" (UniqueName: \"kubernetes.io/projected/b3af73c4-5b19-4d7b-a0db-32822a2c24d6-kube-api-access-6sv48\") pod \"infrawatch-operators-865q7\" (UID: \"b3af73c4-5b19-4d7b-a0db-32822a2c24d6\") " pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.580881 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-6sv48\" (UniqueName: \"kubernetes.io/projected/b3af73c4-5b19-4d7b-a0db-32822a2c24d6-kube-api-access-6sv48\") pod \"infrawatch-operators-865q7\" (UID: \"b3af73c4-5b19-4d7b-a0db-32822a2c24d6\") " pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.599995 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-6sv48\" (UniqueName: \"kubernetes.io/projected/b3af73c4-5b19-4d7b-a0db-32822a2c24d6-kube-api-access-6sv48\") pod \"infrawatch-operators-865q7\" (UID: \"b3af73c4-5b19-4d7b-a0db-32822a2c24d6\") " pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.707176 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:29 crc kubenswrapper[5117]: I0123 09:31:29.967822 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["service-telemetry/infrawatch-operators-865q7"] Jan 23 09:31:29 crc kubenswrapper[5117]: W0123 09:31:29.970373 5117 manager.go:1169] Failed to process watch event {EventType:0 Name:/kubepods.slice/kubepods-burstable.slice/kubepods-burstable-podb3af73c4_5b19_4d7b_a0db_32822a2c24d6.slice/crio-da6f9dfaa4de88db6a7594636ae4bc2f29e76627346ca47475900a86f513cae7 WatchSource:0}: Error finding container da6f9dfaa4de88db6a7594636ae4bc2f29e76627346ca47475900a86f513cae7: Status 404 returned error can't find the container with id da6f9dfaa4de88db6a7594636ae4bc2f29e76627346ca47475900a86f513cae7 Jan 23 09:31:30 crc kubenswrapper[5117]: I0123 09:31:30.419769 5117 generic.go:358] "Generic (PLEG): container finished" podID="e603bb9b-c233-477a-a242-4beac60b50b3" containerID="bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc" exitCode=0 Jan 23 09:31:30 crc kubenswrapper[5117]: I0123 09:31:30.419869 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dvs" event={"ID":"e603bb9b-c233-477a-a242-4beac60b50b3","Type":"ContainerDied","Data":"bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc"} Jan 23 09:31:30 crc kubenswrapper[5117]: I0123 09:31:30.423082 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-865q7" event={"ID":"b3af73c4-5b19-4d7b-a0db-32822a2c24d6","Type":"ContainerStarted","Data":"e90806467a07e30da8ce753bc92b23749c786ea6a046093a6e74e40838f285bb"} Jan 23 09:31:30 crc kubenswrapper[5117]: I0123 09:31:30.423149 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-865q7" event={"ID":"b3af73c4-5b19-4d7b-a0db-32822a2c24d6","Type":"ContainerStarted","Data":"da6f9dfaa4de88db6a7594636ae4bc2f29e76627346ca47475900a86f513cae7"} Jan 23 09:31:31 crc kubenswrapper[5117]: I0123 09:31:31.434275 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dvs" event={"ID":"e603bb9b-c233-477a-a242-4beac60b50b3","Type":"ContainerStarted","Data":"22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a"} Jan 23 09:31:31 crc kubenswrapper[5117]: I0123 09:31:31.463844 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="openshift-marketplace/community-operators-48dvs" podStartSLOduration=4.131780419 podStartE2EDuration="6.46381961s" podCreationTimestamp="2026-01-23 09:31:25 +0000 UTC" firstStartedPulling="2026-01-23 09:31:27.394120286 +0000 UTC m=+2299.150245312" lastFinishedPulling="2026-01-23 09:31:29.726159487 +0000 UTC m=+2301.482284503" observedRunningTime="2026-01-23 09:31:31.45622767 +0000 UTC m=+2303.212352716" watchObservedRunningTime="2026-01-23 09:31:31.46381961 +0000 UTC m=+2303.219944646" Jan 23 09:31:31 crc kubenswrapper[5117]: I0123 09:31:31.464126 5117 pod_startup_latency_tracker.go:104] "Observed pod startup duration" pod="service-telemetry/infrawatch-operators-865q7" podStartSLOduration=4.297400536 podStartE2EDuration="4.464118288s" podCreationTimestamp="2026-01-23 09:31:27 +0000 UTC" firstStartedPulling="2026-01-23 09:31:29.973096131 +0000 UTC m=+2301.729221157" lastFinishedPulling="2026-01-23 09:31:30.139813883 +0000 UTC m=+2301.895938909" observedRunningTime="2026-01-23 09:31:30.570992621 +0000 UTC m=+2302.327117647" watchObservedRunningTime="2026-01-23 
09:31:31.464118288 +0000 UTC m=+2303.220243314" Jan 23 09:31:36 crc kubenswrapper[5117]: I0123 09:31:36.060011 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:36 crc kubenswrapper[5117]: I0123 09:31:36.060287 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:36 crc kubenswrapper[5117]: I0123 09:31:36.101944 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:36 crc kubenswrapper[5117]: I0123 09:31:36.521795 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:39 crc kubenswrapper[5117]: I0123 09:31:39.707521 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="not ready" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:39 crc kubenswrapper[5117]: I0123 09:31:39.707889 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="unhealthy" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:39 crc kubenswrapper[5117]: I0123 09:31:39.741998 5117 kubelet.go:2658] "SyncLoop (probe)" probe="startup" status="started" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:40 crc kubenswrapper[5117]: I0123 09:31:40.533753 5117 kubelet.go:2658] "SyncLoop (probe)" probe="readiness" status="ready" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:42 crc kubenswrapper[5117]: I0123 09:31:42.182545 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48dvs"] Jan 23 09:31:42 crc kubenswrapper[5117]: I0123 09:31:42.182822 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-marketplace/community-operators-48dvs" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="registry-server" containerID="cri-o://22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a" gracePeriod=2 Jan 23 09:31:42 crc kubenswrapper[5117]: I0123 09:31:42.781664 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-865q7"] Jan 23 09:31:42 crc kubenswrapper[5117]: I0123 09:31:42.781928 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="service-telemetry/infrawatch-operators-865q7" podUID="b3af73c4-5b19-4d7b-a0db-32822a2c24d6" containerName="registry-server" containerID="cri-o://e90806467a07e30da8ce753bc92b23749c786ea6a046093a6e74e40838f285bb" gracePeriod=2 Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.448393 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.516762 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-jv52l\" (UniqueName: \"kubernetes.io/projected/e603bb9b-c233-477a-a242-4beac60b50b3-kube-api-access-jv52l\") pod \"e603bb9b-c233-477a-a242-4beac60b50b3\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.516860 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-catalog-content\") pod \"e603bb9b-c233-477a-a242-4beac60b50b3\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.516893 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-utilities\") pod \"e603bb9b-c233-477a-a242-4beac60b50b3\" (UID: \"e603bb9b-c233-477a-a242-4beac60b50b3\") " Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.517609 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-utilities" (OuterVolumeSpecName: "utilities") pod "e603bb9b-c233-477a-a242-4beac60b50b3" (UID: "e603bb9b-c233-477a-a242-4beac60b50b3"). InnerVolumeSpecName "utilities". PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.523323 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e603bb9b-c233-477a-a242-4beac60b50b3-kube-api-access-jv52l" (OuterVolumeSpecName: "kube-api-access-jv52l") pod "e603bb9b-c233-477a-a242-4beac60b50b3" (UID: "e603bb9b-c233-477a-a242-4beac60b50b3"). InnerVolumeSpecName "kube-api-access-jv52l". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.537480 5117 generic.go:358] "Generic (PLEG): container finished" podID="e603bb9b-c233-477a-a242-4beac60b50b3" containerID="22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a" exitCode=0 Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.537563 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dvs" event={"ID":"e603bb9b-c233-477a-a242-4beac60b50b3","Type":"ContainerDied","Data":"22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a"} Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.537588 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/community-operators-48dvs" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.537608 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/community-operators-48dvs" event={"ID":"e603bb9b-c233-477a-a242-4beac60b50b3","Type":"ContainerDied","Data":"d6bd45e8520ecbf4574a92be2381566a152f9f98242658d3b4718709af3c50d4"} Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.537629 5117 scope.go:117] "RemoveContainer" containerID="22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.539101 5117 generic.go:358] "Generic (PLEG): container finished" podID="b3af73c4-5b19-4d7b-a0db-32822a2c24d6" containerID="e90806467a07e30da8ce753bc92b23749c786ea6a046093a6e74e40838f285bb" exitCode=0 Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.539157 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-865q7" event={"ID":"b3af73c4-5b19-4d7b-a0db-32822a2c24d6","Type":"ContainerDied","Data":"e90806467a07e30da8ce753bc92b23749c786ea6a046093a6e74e40838f285bb"} Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.561391 5117 scope.go:117] "RemoveContainer" containerID="bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.579610 5117 scope.go:117] "RemoveContainer" containerID="d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.584427 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-catalog-content" (OuterVolumeSpecName: "catalog-content") pod "e603bb9b-c233-477a-a242-4beac60b50b3" (UID: "e603bb9b-c233-477a-a242-4beac60b50b3"). InnerVolumeSpecName "catalog-content". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.606068 5117 scope.go:117] "RemoveContainer" containerID="22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a" Jan 23 09:31:44 crc kubenswrapper[5117]: E0123 09:31:44.606493 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a\": container with ID starting with 22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a not found: ID does not exist" containerID="22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.606525 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a"} err="failed to get container status \"22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a\": rpc error: code = NotFound desc = could not find container \"22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a\": container with ID starting with 22a59ebd381f9329003ebbd804d9aed6d56bca97938b1b533a5ea081c0d4f95a not found: ID does not exist" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.606544 5117 scope.go:117] "RemoveContainer" containerID="bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc" Jan 23 09:31:44 crc kubenswrapper[5117]: E0123 09:31:44.606902 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc\": container with ID starting with bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc not found: ID does not exist" containerID="bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.606922 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc"} err="failed to get container status \"bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc\": rpc error: code = NotFound desc = could not find container \"bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc\": container with ID starting with bc6d173c1f8d291ac3f6aaebd710f0c098bbfbe0872c44585c6200ac8dfcf3dc not found: ID does not exist" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.606935 5117 scope.go:117] "RemoveContainer" containerID="d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561" Jan 23 09:31:44 crc kubenswrapper[5117]: E0123 09:31:44.607344 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561\": container with ID starting with d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561 not found: ID does not exist" containerID="d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.607382 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561"} err="failed to get container status \"d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561\": rpc error: code = NotFound desc = could not 
find container \"d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561\": container with ID starting with d3ce81ee0a6a6daad0b5735b4dd51035c18125df0a2e2ea2da8f873c3ac3c561 not found: ID does not exist" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.618266 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-jv52l\" (UniqueName: \"kubernetes.io/projected/e603bb9b-c233-477a-a242-4beac60b50b3-kube-api-access-jv52l\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.618298 5117 reconciler_common.go:299] "Volume detached for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-catalog-content\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.618311 5117 reconciler_common.go:299] "Volume detached for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/e603bb9b-c233-477a-a242-4beac60b50b3-utilities\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.858557 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-marketplace/community-operators-48dvs"] Jan 23 09:31:44 crc kubenswrapper[5117]: I0123 09:31:44.865280 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-marketplace/community-operators-48dvs"] Jan 23 09:31:45 crc kubenswrapper[5117]: I0123 09:31:45.064068 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:31:45 crc kubenswrapper[5117]: I0123 09:31:45.064190 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.030251 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.138105 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-6sv48\" (UniqueName: \"kubernetes.io/projected/b3af73c4-5b19-4d7b-a0db-32822a2c24d6-kube-api-access-6sv48\") pod \"b3af73c4-5b19-4d7b-a0db-32822a2c24d6\" (UID: \"b3af73c4-5b19-4d7b-a0db-32822a2c24d6\") " Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.144178 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/b3af73c4-5b19-4d7b-a0db-32822a2c24d6-kube-api-access-6sv48" (OuterVolumeSpecName: "kube-api-access-6sv48") pod "b3af73c4-5b19-4d7b-a0db-32822a2c24d6" (UID: "b3af73c4-5b19-4d7b-a0db-32822a2c24d6"). InnerVolumeSpecName "kube-api-access-6sv48". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.239761 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-6sv48\" (UniqueName: \"kubernetes.io/projected/b3af73c4-5b19-4d7b-a0db-32822a2c24d6-kube-api-access-6sv48\") on node \"crc\" DevicePath \"\"" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.566861 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="service-telemetry/infrawatch-operators-865q7" event={"ID":"b3af73c4-5b19-4d7b-a0db-32822a2c24d6","Type":"ContainerDied","Data":"da6f9dfaa4de88db6a7594636ae4bc2f29e76627346ca47475900a86f513cae7"} Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.566946 5117 scope.go:117] "RemoveContainer" containerID="e90806467a07e30da8ce753bc92b23749c786ea6a046093a6e74e40838f285bb" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.566975 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="service-telemetry/infrawatch-operators-865q7" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.606381 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["service-telemetry/infrawatch-operators-865q7"] Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.613066 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["service-telemetry/infrawatch-operators-865q7"] Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.779111 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="b3af73c4-5b19-4d7b-a0db-32822a2c24d6" path="/var/lib/kubelet/pods/b3af73c4-5b19-4d7b-a0db-32822a2c24d6/volumes" Jan 23 09:31:46 crc kubenswrapper[5117]: I0123 09:31:46.779968 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" path="/var/lib/kubelet/pods/e603bb9b-c233-477a-a242-4beac60b50b3/volumes" Jan 23 09:31:57 crc kubenswrapper[5117]: I0123 09:31:57.478956 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_control-plane-machine-set-operator-75ffdb6fcd-v4h4l_48769271-1362-41ce-a21c-ecf6d869aece/control-plane-machine-set-operator/0.log" Jan 23 09:31:57 crc kubenswrapper[5117]: I0123 09:31:57.598682 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-gr9mz_1f4bb843-1670-4180-9fe6-43c005930de0/machine-api-operator/0.log" Jan 23 09:31:57 crc kubenswrapper[5117]: I0123 09:31:57.608834 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-machine-api_machine-api-operator-755bb95488-gr9mz_1f4bb843-1670-4180-9fe6-43c005930de0/kube-rbac-proxy/0.log" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.137717 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486012-mvdgc"] Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138640 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="registry-server" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138654 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="registry-server" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138679 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="extract-utilities" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138685 5117 state_mem.go:107] "Deleted CPUSet 
assignment" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="extract-utilities" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138692 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="extract-content" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138699 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="extract-content" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138708 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="b3af73c4-5b19-4d7b-a0db-32822a2c24d6" containerName="registry-server" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138713 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="b3af73c4-5b19-4d7b-a0db-32822a2c24d6" containerName="registry-server" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138832 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="b3af73c4-5b19-4d7b-a0db-32822a2c24d6" containerName="registry-server" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.138850 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e603bb9b-c233-477a-a242-4beac60b50b3" containerName="registry-server" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.402646 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486012-mvdgc"] Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.402853 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.405118 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.405497 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.406111 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.439194 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-rd4rd\" (UniqueName: \"kubernetes.io/projected/e07da340-32c1-4ffb-bbc4-1d386f902df4-kube-api-access-rd4rd\") pod \"auto-csr-approver-29486012-mvdgc\" (UID: \"e07da340-32c1-4ffb-bbc4-1d386f902df4\") " pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.540292 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-rd4rd\" (UniqueName: \"kubernetes.io/projected/e07da340-32c1-4ffb-bbc4-1d386f902df4-kube-api-access-rd4rd\") pod \"auto-csr-approver-29486012-mvdgc\" (UID: \"e07da340-32c1-4ffb-bbc4-1d386f902df4\") " pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.562685 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-rd4rd\" (UniqueName: \"kubernetes.io/projected/e07da340-32c1-4ffb-bbc4-1d386f902df4-kube-api-access-rd4rd\") pod \"auto-csr-approver-29486012-mvdgc\" (UID: \"e07da340-32c1-4ffb-bbc4-1d386f902df4\") " pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:00 crc 
kubenswrapper[5117]: I0123 09:32:00.725365 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:00 crc kubenswrapper[5117]: I0123 09:32:00.995056 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486012-mvdgc"] Jan 23 09:32:01 crc kubenswrapper[5117]: I0123 09:32:01.006608 5117 provider.go:93] Refreshing cache for provider: *credentialprovider.defaultDockerConfigProvider Jan 23 09:32:01 crc kubenswrapper[5117]: I0123 09:32:01.678759 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" event={"ID":"e07da340-32c1-4ffb-bbc4-1d386f902df4","Type":"ContainerStarted","Data":"0e5e74069a93fee0d7933c9ea1bfec7ad42017dd6e917608659d121c5d5eb601"} Jan 23 09:32:03 crc kubenswrapper[5117]: I0123 09:32:03.697905 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" event={"ID":"e07da340-32c1-4ffb-bbc4-1d386f902df4","Type":"ContainerStarted","Data":"3502d1a7b9caa55fb51f276cd571b3f22d0d5c5799a1acfa287178ad658a2632"} Jan 23 09:32:04 crc kubenswrapper[5117]: I0123 09:32:04.709119 5117 generic.go:358] "Generic (PLEG): container finished" podID="e07da340-32c1-4ffb-bbc4-1d386f902df4" containerID="3502d1a7b9caa55fb51f276cd571b3f22d0d5c5799a1acfa287178ad658a2632" exitCode=0 Jan 23 09:32:04 crc kubenswrapper[5117]: I0123 09:32:04.709250 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" event={"ID":"e07da340-32c1-4ffb-bbc4-1d386f902df4","Type":"ContainerDied","Data":"3502d1a7b9caa55fb51f276cd571b3f22d0d5c5799a1acfa287178ad658a2632"} Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.024192 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.133936 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-rd4rd\" (UniqueName: \"kubernetes.io/projected/e07da340-32c1-4ffb-bbc4-1d386f902df4-kube-api-access-rd4rd\") pod \"e07da340-32c1-4ffb-bbc4-1d386f902df4\" (UID: \"e07da340-32c1-4ffb-bbc4-1d386f902df4\") " Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.140402 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/e07da340-32c1-4ffb-bbc4-1d386f902df4-kube-api-access-rd4rd" (OuterVolumeSpecName: "kube-api-access-rd4rd") pod "e07da340-32c1-4ffb-bbc4-1d386f902df4" (UID: "e07da340-32c1-4ffb-bbc4-1d386f902df4"). InnerVolumeSpecName "kube-api-access-rd4rd". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.235707 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-rd4rd\" (UniqueName: \"kubernetes.io/projected/e07da340-32c1-4ffb-bbc4-1d386f902df4-kube-api-access-rd4rd\") on node \"crc\" DevicePath \"\"" Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.724420 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.724441 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486012-mvdgc" event={"ID":"e07da340-32c1-4ffb-bbc4-1d386f902df4","Type":"ContainerDied","Data":"0e5e74069a93fee0d7933c9ea1bfec7ad42017dd6e917608659d121c5d5eb601"} Jan 23 09:32:06 crc kubenswrapper[5117]: I0123 09:32:06.724489 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="0e5e74069a93fee0d7933c9ea1bfec7ad42017dd6e917608659d121c5d5eb601" Jan 23 09:32:07 crc kubenswrapper[5117]: I0123 09:32:07.094817 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29486006-mqgks"] Jan 23 09:32:07 crc kubenswrapper[5117]: I0123 09:32:07.101829 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29486006-mqgks"] Jan 23 09:32:08 crc kubenswrapper[5117]: I0123 09:32:08.782825 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="d28b8a9f-3568-47eb-9cb8-4b958070464b" path="/var/lib/kubelet/pods/d28b8a9f-3568-47eb-9cb8-4b958070464b/volumes" Jan 23 09:32:09 crc kubenswrapper[5117]: I0123 09:32:09.627549 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-858d87f86b-rdd8d_8247d078-8f58-4aa8-a4b3-cb8c726f45b0/cert-manager-controller/0.log" Jan 23 09:32:09 crc kubenswrapper[5117]: I0123 09:32:09.785506 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-cainjector-7dbf76d5c8-2jzcn_2e37c906-4fc2-4bfb-ae26-08ce2c858c5f/cert-manager-cainjector/0.log" Jan 23 09:32:09 crc kubenswrapper[5117]: I0123 09:32:09.849958 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/cert-manager_cert-manager-webhook-7894b5b9b4-7597f_ba66ec84-7f70-4462-a90a-9d91db534c99/cert-manager-webhook/0.log" Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.063984 5117 patch_prober.go:28] interesting pod/machine-config-daemon-qfh6g container/machine-config-daemon namespace/openshift-machine-config-operator: Liveness probe status=failure output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" start-of-body= Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.064684 5117 prober.go:120] "Probe failed" probeType="Liveness" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" probeResult="failure" output="Get \"http://127.0.0.1:8798/health\": dial tcp 127.0.0.1:8798: connect: connection refused" Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.064742 5117 kubelet.go:2658] "SyncLoop (probe)" probe="liveness" status="unhealthy" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.065390 5117 kuberuntime_manager.go:1107] "Message for Container of pod" containerName="machine-config-daemon" containerStatusID={"Type":"cri-o","ID":"44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62"} pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" containerMessage="Container machine-config-daemon failed liveness probe, will be restarted" Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.065458 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" 
podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerName="machine-config-daemon" containerID="cri-o://44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" gracePeriod=600 Jan 23 09:32:15 crc kubenswrapper[5117]: E0123 09:32:15.694714 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.798142 5117 generic.go:358] "Generic (PLEG): container finished" podID="2d41b436-a78c-412b-b56c-54b8d73381e6" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" exitCode=0 Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.798180 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" event={"ID":"2d41b436-a78c-412b-b56c-54b8d73381e6","Type":"ContainerDied","Data":"44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62"} Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.798628 5117 scope.go:117] "RemoveContainer" containerID="994fa97d1cb60133ddd28a5a7c053d2a40f4fd74acc6d90fde40e86efd34b82f" Jan 23 09:32:15 crc kubenswrapper[5117]: I0123 09:32:15.799328 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:32:15 crc kubenswrapper[5117]: E0123 09:32:15.799696 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:32:23 crc kubenswrapper[5117]: I0123 09:32:23.448233 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kzqgs_b2f00157-5589-44da-862e-a7686842803f/prometheus-operator/0.log" Jan 23 09:32:23 crc kubenswrapper[5117]: I0123 09:32:23.671669 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2_0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052/prometheus-operator-admission-webhook/0.log" Jan 23 09:32:23 crc kubenswrapper[5117]: I0123 09:32:23.702524 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4_d3846cc0-7f86-492a-a1df-8d92f3fd5094/prometheus-operator-admission-webhook/0.log" Jan 23 09:32:23 crc kubenswrapper[5117]: I0123 09:32:23.881210 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-4cs2d_e628d153-ac5d-4e3f-8b44-e30ada549a31/operator/0.log" Jan 23 09:32:23 crc kubenswrapper[5117]: I0123 09:32:23.908023 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-z7w5m_bb3c5e0d-73e6-4766-9f06-dc44d438a55c/perses-operator/0.log" Jan 23 09:32:29 crc kubenswrapper[5117]: I0123 09:32:29.770164 5117 scope.go:117] "RemoveContainer" 
containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:32:29 crc kubenswrapper[5117]: E0123 09:32:29.772045 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.035054 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/util/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.188685 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/util/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.233435 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/pull/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.245307 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/pull/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.444176 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/util/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.452914 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/extract/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.480511 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_1f59f640c8a0eb1a7b0f26c81382bbdde784d03eb439a940bb8da3931a5ss8z_2d8ec4bc-ddd0-43a5-b6ac-7001f740ec7c/pull/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.594737 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/util/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.795295 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/pull/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.795295 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/pull/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.815625 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/util/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.974931 5117 log.go:25] "Finished parsing 
log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/pull/0.log" Jan 23 09:32:37 crc kubenswrapper[5117]: I0123 09:32:37.981730 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/util/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.005382 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_6e3e74c24700cc2bb66271d960117ff0976dc779e6a3bc37905b952e8fxlb7w_66db71e8-761f-4bae-a7d3-c48c50f8c4f4/extract/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.142687 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/util/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.283071 5117 scope.go:117] "RemoveContainer" containerID="de23e6420da0a0ec66f7ad880fb305728b32f10df797214aa643585ff1df5522" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.354710 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/util/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.415677 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/pull/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.415939 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/pull/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.522419 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/pull/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.582800 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/util/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.652530 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_8ed862a309935d5a1c8012df79b93f7fb46e029d4689f7f6ddcb9e7f5e8ftnd_672c3b18-7345-48b9-ad22-86920cf6e02d/extract/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.753108 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/util/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.913884 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/pull/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 09:32:38.915412 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/util/0.log" Jan 23 09:32:38 crc kubenswrapper[5117]: I0123 
09:32:38.919048 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/pull/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.170320 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/util/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.216865 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/extract/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.252360 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_98629960b44b381d1a86cff1d1439a8df43509c9ad24579158c59d0f082txwf_929b31d5-d5f7-4610-960d-bcb4817b7eee/pull/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.370015 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/extract-utilities/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.561093 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/extract-content/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.571574 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/extract-utilities/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.592483 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/extract-content/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.749970 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/extract-utilities/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.750516 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/extract-content/0.log" Jan 23 09:32:39 crc kubenswrapper[5117]: I0123 09:32:39.957099 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/extract-utilities/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.125685 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_certified-operators-2wlmv_63caebec-168b-4984-9847-91f566659602/registry-server/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.185600 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/extract-content/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.214974 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/extract-content/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.215217 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/extract-utilities/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.396077 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/extract-content/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.396124 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/extract-utilities/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.440887 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_marketplace-operator-547dbd544d-bfxhb_50c35dac-9534-4fed-a34f-d2cabebef0e6/marketplace-operator/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.634067 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/extract-utilities/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.915970 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/extract-utilities/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.916993 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/extract-content/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.924812 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_community-operators-5mslb_93a67942-29ab-4f71-877a-c5e802cb444d/registry-server/0.log" Jan 23 09:32:40 crc kubenswrapper[5117]: I0123 09:32:40.934234 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/extract-content/0.log" Jan 23 09:32:41 crc kubenswrapper[5117]: I0123 09:32:41.106308 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/extract-content/0.log" Jan 23 09:32:41 crc kubenswrapper[5117]: I0123 09:32:41.128102 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/extract-utilities/0.log" Jan 23 09:32:41 crc kubenswrapper[5117]: I0123 09:32:41.522532 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-marketplace_redhat-operators-z5dbk_d4898852-a834-401f-9f74-6d33c221a86f/registry-server/0.log" Jan 23 09:32:44 crc kubenswrapper[5117]: I0123 09:32:44.771245 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:32:44 crc kubenswrapper[5117]: E0123 09:32:44.772254 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:32:53 crc kubenswrapper[5117]: I0123 09:32:53.091723 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-679bcfb6b-5gnz2_0d4518b1-d47d-45d4-a8f1-2cc8a5fe4052/prometheus-operator-admission-webhook/0.log" Jan 23 09:32:53 crc kubenswrapper[5117]: I0123 09:32:53.104989 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-9bc85b4bf-kzqgs_b2f00157-5589-44da-862e-a7686842803f/prometheus-operator/0.log" Jan 23 09:32:53 crc kubenswrapper[5117]: I0123 09:32:53.126515 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_obo-prometheus-operator-admission-webhook-679bcfb6b-f87v4_d3846cc0-7f86-492a-a1df-8d92f3fd5094/prometheus-operator-admission-webhook/0.log" Jan 23 09:32:53 crc kubenswrapper[5117]: I0123 09:32:53.250636 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_observability-operator-85c68dddb-4cs2d_e628d153-ac5d-4e3f-8b44-e30ada549a31/operator/0.log" Jan 23 09:32:53 crc kubenswrapper[5117]: I0123 09:32:53.273188 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-operators_perses-operator-669c9f96b5-z7w5m_bb3c5e0d-73e6-4766-9f06-dc44d438a55c/perses-operator/0.log" Jan 23 09:32:59 crc kubenswrapper[5117]: I0123 09:32:59.771298 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:32:59 crc kubenswrapper[5117]: E0123 09:32:59.771725 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:33:13 crc kubenswrapper[5117]: I0123 09:33:13.770431 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:33:13 crc kubenswrapper[5117]: E0123 09:33:13.771427 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:33:25 crc kubenswrapper[5117]: I0123 09:33:25.771344 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:33:25 crc kubenswrapper[5117]: E0123 09:33:25.772184 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:33:37 crc kubenswrapper[5117]: I0123 09:33:37.940667 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:33:37 crc kubenswrapper[5117]: I0123 09:33:37.941600 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-multus_multus-g7xdw_70f944bb-0390-45c1-914f-5389215db1cd/kube-multus/0.log" Jan 23 09:33:37 crc kubenswrapper[5117]: I0123 09:33:37.946526 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:33:37 crc kubenswrapper[5117]: I0123 09:33:37.948600 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-kube-controller-manager_kube-controller-manager-crc_9f0bc7fcb0822a2c13eb2d22cd8c0641/kube-controller-manager/0.log" Jan 23 09:33:39 crc kubenswrapper[5117]: I0123 09:33:39.770385 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:33:39 crc kubenswrapper[5117]: E0123 09:33:39.771005 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:33:41 crc kubenswrapper[5117]: I0123 09:33:41.522419 5117 generic.go:358] "Generic (PLEG): container finished" podID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerID="92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af" exitCode=0 Jan 23 09:33:41 crc kubenswrapper[5117]: I0123 09:33:41.522514 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-must-gather-rdz5n/must-gather-swfvf" event={"ID":"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3","Type":"ContainerDied","Data":"92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af"} Jan 23 09:33:41 crc kubenswrapper[5117]: I0123 09:33:41.523091 5117 scope.go:117] "RemoveContainer" containerID="92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af" Jan 23 09:33:41 crc kubenswrapper[5117]: I0123 09:33:41.654392 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdz5n_must-gather-swfvf_cbd6a84e-1a52-4fd2-ae81-26e3c66375a3/gather/0.log" Jan 23 09:33:47 crc kubenswrapper[5117]: I0123 09:33:47.869007 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-must-gather-rdz5n/must-gather-swfvf"] Jan 23 09:33:47 crc kubenswrapper[5117]: I0123 09:33:47.869893 5117 kuberuntime_container.go:858] "Killing container with a grace period" pod="openshift-must-gather-rdz5n/must-gather-swfvf" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="copy" containerID="cri-o://22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb" gracePeriod=2 Jan 23 09:33:47 crc kubenswrapper[5117]: I0123 09:33:47.874022 5117 status_manager.go:895] "Failed to get status for pod" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" pod="openshift-must-gather-rdz5n/must-gather-swfvf" err="pods \"must-gather-swfvf\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-rdz5n\": no relationship found between node 'crc' and this object" Jan 23 09:33:47 crc kubenswrapper[5117]: I0123 09:33:47.885375 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-must-gather-rdz5n/must-gather-swfvf"] Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.279162 5117 log.go:25] "Finished parsing log file" 
path="/var/log/pods/openshift-must-gather-rdz5n_must-gather-swfvf_cbd6a84e-1a52-4fd2-ae81-26e3c66375a3/copy/0.log" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.280063 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.282608 5117 status_manager.go:895] "Failed to get status for pod" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" pod="openshift-must-gather-rdz5n/must-gather-swfvf" err="pods \"must-gather-swfvf\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-rdz5n\": no relationship found between node 'crc' and this object" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.351660 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-must-gather-output\") pod \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.351771 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-lbsjq\" (UniqueName: \"kubernetes.io/projected/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-kube-api-access-lbsjq\") pod \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\" (UID: \"cbd6a84e-1a52-4fd2-ae81-26e3c66375a3\") " Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.358976 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-kube-api-access-lbsjq" (OuterVolumeSpecName: "kube-api-access-lbsjq") pod "cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" (UID: "cbd6a84e-1a52-4fd2-ae81-26e3c66375a3"). InnerVolumeSpecName "kube-api-access-lbsjq". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.413764 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/empty-dir/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-must-gather-output" (OuterVolumeSpecName: "must-gather-output") pod "cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" (UID: "cbd6a84e-1a52-4fd2-ae81-26e3c66375a3"). InnerVolumeSpecName "must-gather-output". 
PluginName "kubernetes.io/empty-dir", VolumeGIDValue "" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.454568 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-lbsjq\" (UniqueName: \"kubernetes.io/projected/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-kube-api-access-lbsjq\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.454648 5117 reconciler_common.go:299] "Volume detached for volume \"must-gather-output\" (UniqueName: \"kubernetes.io/empty-dir/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3-must-gather-output\") on node \"crc\" DevicePath \"\"" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.594165 5117 log.go:25] "Finished parsing log file" path="/var/log/pods/openshift-must-gather-rdz5n_must-gather-swfvf_cbd6a84e-1a52-4fd2-ae81-26e3c66375a3/copy/0.log" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.594718 5117 generic.go:358] "Generic (PLEG): container finished" podID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerID="22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb" exitCode=143 Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.594871 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-must-gather-rdz5n/must-gather-swfvf" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.595011 5117 scope.go:117] "RemoveContainer" containerID="22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.597012 5117 status_manager.go:895] "Failed to get status for pod" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" pod="openshift-must-gather-rdz5n/must-gather-swfvf" err="pods \"must-gather-swfvf\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-rdz5n\": no relationship found between node 'crc' and this object" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.616099 5117 status_manager.go:895] "Failed to get status for pod" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" pod="openshift-must-gather-rdz5n/must-gather-swfvf" err="pods \"must-gather-swfvf\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-rdz5n\": no relationship found between node 'crc' and this object" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.626426 5117 scope.go:117] "RemoveContainer" containerID="92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.708095 5117 scope.go:117] "RemoveContainer" containerID="22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb" Jan 23 09:33:48 crc kubenswrapper[5117]: E0123 09:33:48.708599 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb\": container with ID starting with 22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb not found: ID does not exist" containerID="22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.708680 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb"} err="failed to get container status \"22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb\": rpc error: code = NotFound desc 
= could not find container \"22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb\": container with ID starting with 22d153520be1f3ef56e37b93208f06fce8f15b2823e0d6c7697dfbe3a88c9dbb not found: ID does not exist" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.708708 5117 scope.go:117] "RemoveContainer" containerID="92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af" Jan 23 09:33:48 crc kubenswrapper[5117]: E0123 09:33:48.709758 5117 log.go:32] "ContainerStatus from runtime service failed" err="rpc error: code = NotFound desc = could not find container \"92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af\": container with ID starting with 92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af not found: ID does not exist" containerID="92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.709836 5117 pod_container_deletor.go:53] "DeleteContainer returned error" containerID={"Type":"cri-o","ID":"92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af"} err="failed to get container status \"92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af\": rpc error: code = NotFound desc = could not find container \"92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af\": container with ID starting with 92af46034d5d868904aa258940fe048ceea8f7647ffc94ed7e4ff5e3274157af not found: ID does not exist" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.780405 5117 status_manager.go:895] "Failed to get status for pod" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" pod="openshift-must-gather-rdz5n/must-gather-swfvf" err="pods \"must-gather-swfvf\" is forbidden: User \"system:node:crc\" cannot get resource \"pods\" in API group \"\" in the namespace \"openshift-must-gather-rdz5n\": no relationship found between node 'crc' and this object" Jan 23 09:33:48 crc kubenswrapper[5117]: I0123 09:33:48.785738 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" path="/var/lib/kubelet/pods/cbd6a84e-1a52-4fd2-ae81-26e3c66375a3/volumes" Jan 23 09:33:50 crc kubenswrapper[5117]: I0123 09:33:50.771094 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:33:50 crc kubenswrapper[5117]: E0123 09:33:50.771865 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.138459 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486014-xb2xm"] Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.139987 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="gather" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140007 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="gather" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140025 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" 
podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="copy" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140033 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="copy" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140050 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="e07da340-32c1-4ffb-bbc4-1d386f902df4" containerName="oc" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140056 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="e07da340-32c1-4ffb-bbc4-1d386f902df4" containerName="oc" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140233 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="gather" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140251 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="e07da340-32c1-4ffb-bbc4-1d386f902df4" containerName="oc" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.140269 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="cbd6a84e-1a52-4fd2-ae81-26e3c66375a3" containerName="copy" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.182003 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486014-xb2xm"] Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.182057 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.184477 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.184654 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.184940 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.245188 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-z6284\" (UniqueName: \"kubernetes.io/projected/346275f9-d4fa-4493-83ef-aa8d4855bf07-kube-api-access-z6284\") pod \"auto-csr-approver-29486014-xb2xm\" (UID: \"346275f9-d4fa-4493-83ef-aa8d4855bf07\") " pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.347443 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-z6284\" (UniqueName: \"kubernetes.io/projected/346275f9-d4fa-4493-83ef-aa8d4855bf07-kube-api-access-z6284\") pod \"auto-csr-approver-29486014-xb2xm\" (UID: \"346275f9-d4fa-4493-83ef-aa8d4855bf07\") " pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.369010 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-z6284\" (UniqueName: \"kubernetes.io/projected/346275f9-d4fa-4493-83ef-aa8d4855bf07-kube-api-access-z6284\") pod \"auto-csr-approver-29486014-xb2xm\" (UID: \"346275f9-d4fa-4493-83ef-aa8d4855bf07\") " pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.501606 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:00 crc kubenswrapper[5117]: I0123 09:34:00.719755 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486014-xb2xm"] Jan 23 09:34:01 crc kubenswrapper[5117]: I0123 09:34:01.709798 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" event={"ID":"346275f9-d4fa-4493-83ef-aa8d4855bf07","Type":"ContainerStarted","Data":"cd5de976beb66b42203513aa5c9c64e10baae742d4e983f31a0a9236c05a6995"} Jan 23 09:34:01 crc kubenswrapper[5117]: I0123 09:34:01.771312 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:34:01 crc kubenswrapper[5117]: E0123 09:34:01.771926 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:34:02 crc kubenswrapper[5117]: I0123 09:34:02.719489 5117 generic.go:358] "Generic (PLEG): container finished" podID="346275f9-d4fa-4493-83ef-aa8d4855bf07" containerID="b5316389dd17df7aad15c1360e5106c4153ca4a4e9044b57162e0d86a337295b" exitCode=0 Jan 23 09:34:02 crc kubenswrapper[5117]: I0123 09:34:02.719609 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" event={"ID":"346275f9-d4fa-4493-83ef-aa8d4855bf07","Type":"ContainerDied","Data":"b5316389dd17df7aad15c1360e5106c4153ca4a4e9044b57162e0d86a337295b"} Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.013205 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.102173 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-z6284\" (UniqueName: \"kubernetes.io/projected/346275f9-d4fa-4493-83ef-aa8d4855bf07-kube-api-access-z6284\") pod \"346275f9-d4fa-4493-83ef-aa8d4855bf07\" (UID: \"346275f9-d4fa-4493-83ef-aa8d4855bf07\") " Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.109559 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/346275f9-d4fa-4493-83ef-aa8d4855bf07-kube-api-access-z6284" (OuterVolumeSpecName: "kube-api-access-z6284") pod "346275f9-d4fa-4493-83ef-aa8d4855bf07" (UID: "346275f9-d4fa-4493-83ef-aa8d4855bf07"). InnerVolumeSpecName "kube-api-access-z6284". 
PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.203764 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-z6284\" (UniqueName: \"kubernetes.io/projected/346275f9-d4fa-4493-83ef-aa8d4855bf07-kube-api-access-z6284\") on node \"crc\" DevicePath \"\"" Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.738723 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" event={"ID":"346275f9-d4fa-4493-83ef-aa8d4855bf07","Type":"ContainerDied","Data":"cd5de976beb66b42203513aa5c9c64e10baae742d4e983f31a0a9236c05a6995"} Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.738768 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="cd5de976beb66b42203513aa5c9c64e10baae742d4e983f31a0a9236c05a6995" Jan 23 09:34:04 crc kubenswrapper[5117]: I0123 09:34:04.738941 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486014-xb2xm" Jan 23 09:34:05 crc kubenswrapper[5117]: I0123 09:34:05.070064 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29486008-wrnms"] Jan 23 09:34:05 crc kubenswrapper[5117]: I0123 09:34:05.077640 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29486008-wrnms"] Jan 23 09:34:06 crc kubenswrapper[5117]: I0123 09:34:06.781448 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="7050f784-d7f3-4287-b267-292ddd8a13f1" path="/var/lib/kubelet/pods/7050f784-d7f3-4287-b267-292ddd8a13f1/volumes" Jan 23 09:34:13 crc kubenswrapper[5117]: I0123 09:34:13.770564 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:34:13 crc kubenswrapper[5117]: E0123 09:34:13.771437 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:34:29 crc kubenswrapper[5117]: I0123 09:34:29.771508 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:34:29 crc kubenswrapper[5117]: E0123 09:34:29.772366 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:34:38 crc kubenswrapper[5117]: I0123 09:34:38.442320 5117 scope.go:117] "RemoveContainer" containerID="d2409fcace37f4e306b924d2bd79bfdfab8005f355384190d239039f69e585a9" Jan 23 09:34:40 crc kubenswrapper[5117]: I0123 09:34:40.771174 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:34:40 crc kubenswrapper[5117]: E0123 09:34:40.771813 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for 
\"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:34:51 crc kubenswrapper[5117]: I0123 09:34:51.771125 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:34:51 crc kubenswrapper[5117]: E0123 09:34:51.771903 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:35:05 crc kubenswrapper[5117]: I0123 09:35:05.770940 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:35:05 crc kubenswrapper[5117]: E0123 09:35:05.771870 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:35:19 crc kubenswrapper[5117]: I0123 09:35:19.772508 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:35:19 crc kubenswrapper[5117]: E0123 09:35:19.774585 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:35:31 crc kubenswrapper[5117]: I0123 09:35:31.770644 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:35:31 crc kubenswrapper[5117]: E0123 09:35:31.771464 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:35:43 crc kubenswrapper[5117]: I0123 09:35:43.772480 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:35:43 crc kubenswrapper[5117]: E0123 09:35:43.773460 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon 
pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:35:54 crc kubenswrapper[5117]: I0123 09:35:54.771491 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:35:54 crc kubenswrapper[5117]: E0123 09:35:54.772330 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:36:00 crc kubenswrapper[5117]: I0123 09:36:00.135267 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-infra/auto-csr-approver-29486016-rzbh7"] Jan 23 09:36:00 crc kubenswrapper[5117]: I0123 09:36:00.136695 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="346275f9-d4fa-4493-83ef-aa8d4855bf07" containerName="oc" Jan 23 09:36:00 crc kubenswrapper[5117]: I0123 09:36:00.136713 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="346275f9-d4fa-4493-83ef-aa8d4855bf07" containerName="oc" Jan 23 09:36:00 crc kubenswrapper[5117]: I0123 09:36:00.136922 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="346275f9-d4fa-4493-83ef-aa8d4855bf07" containerName="oc" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.180304 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.183582 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"kube-root-ca.crt\"" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.183749 5117 reflector.go:430] "Caches populated" type="*v1.Secret" reflector="object-\"openshift-infra\"/\"csr-approver-sa-dockercfg-lsgx8\"" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.185390 5117 reflector.go:430] "Caches populated" type="*v1.ConfigMap" reflector="object-\"openshift-infra\"/\"openshift-service-ca.crt\"" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.193247 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486016-rzbh7"] Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.230498 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-hntzk\" (UniqueName: \"kubernetes.io/projected/1645b5f8-1fe4-4c1a-9e89-25920b3f371c-kube-api-access-hntzk\") pod \"auto-csr-approver-29486016-rzbh7\" (UID: \"1645b5f8-1fe4-4c1a-9e89-25920b3f371c\") " pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.332106 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-hntzk\" (UniqueName: \"kubernetes.io/projected/1645b5f8-1fe4-4c1a-9e89-25920b3f371c-kube-api-access-hntzk\") pod \"auto-csr-approver-29486016-rzbh7\" (UID: \"1645b5f8-1fe4-4c1a-9e89-25920b3f371c\") " pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.352500 5117 operation_generator.go:615] "MountVolume.SetUp 
succeeded for volume \"kube-api-access-hntzk\" (UniqueName: \"kubernetes.io/projected/1645b5f8-1fe4-4c1a-9e89-25920b3f371c-kube-api-access-hntzk\") pod \"auto-csr-approver-29486016-rzbh7\" (UID: \"1645b5f8-1fe4-4c1a-9e89-25920b3f371c\") " pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.503453 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:01 crc kubenswrapper[5117]: I0123 09:36:01.795584 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-infra/auto-csr-approver-29486016-rzbh7"] Jan 23 09:36:02 crc kubenswrapper[5117]: I0123 09:36:02.716208 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" event={"ID":"1645b5f8-1fe4-4c1a-9e89-25920b3f371c","Type":"ContainerStarted","Data":"8e020a0380fe10acc529d2a29c6f2cad80281fddcd2e9dd61f4140e6c916b324"} Jan 23 09:36:03 crc kubenswrapper[5117]: I0123 09:36:03.725554 5117 generic.go:358] "Generic (PLEG): container finished" podID="1645b5f8-1fe4-4c1a-9e89-25920b3f371c" containerID="443a2d017f07c1584e55de04a54269a1aaf433a9eddaf337be01f39efc255122" exitCode=0 Jan 23 09:36:03 crc kubenswrapper[5117]: I0123 09:36:03.725621 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" event={"ID":"1645b5f8-1fe4-4c1a-9e89-25920b3f371c","Type":"ContainerDied","Data":"443a2d017f07c1584e55de04a54269a1aaf433a9eddaf337be01f39efc255122"} Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.001057 5117 util.go:48] "No ready sandbox for pod can be found. Need to start a new one" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.117505 5117 reconciler_common.go:162] "operationExecutor.UnmountVolume started for volume \"kube-api-access-hntzk\" (UniqueName: \"kubernetes.io/projected/1645b5f8-1fe4-4c1a-9e89-25920b3f371c-kube-api-access-hntzk\") pod \"1645b5f8-1fe4-4c1a-9e89-25920b3f371c\" (UID: \"1645b5f8-1fe4-4c1a-9e89-25920b3f371c\") " Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.124571 5117 operation_generator.go:781] UnmountVolume.TearDown succeeded for volume "kubernetes.io/projected/1645b5f8-1fe4-4c1a-9e89-25920b3f371c-kube-api-access-hntzk" (OuterVolumeSpecName: "kube-api-access-hntzk") pod "1645b5f8-1fe4-4c1a-9e89-25920b3f371c" (UID: "1645b5f8-1fe4-4c1a-9e89-25920b3f371c"). InnerVolumeSpecName "kube-api-access-hntzk". PluginName "kubernetes.io/projected", VolumeGIDValue "" Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.218931 5117 reconciler_common.go:299] "Volume detached for volume \"kube-api-access-hntzk\" (UniqueName: \"kubernetes.io/projected/1645b5f8-1fe4-4c1a-9e89-25920b3f371c-kube-api-access-hntzk\") on node \"crc\" DevicePath \"\"" Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.745927 5117 util.go:48] "No ready sandbox for pod can be found. 
Need to start a new one" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.745933 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-infra/auto-csr-approver-29486016-rzbh7" event={"ID":"1645b5f8-1fe4-4c1a-9e89-25920b3f371c","Type":"ContainerDied","Data":"8e020a0380fe10acc529d2a29c6f2cad80281fddcd2e9dd61f4140e6c916b324"} Jan 23 09:36:05 crc kubenswrapper[5117]: I0123 09:36:05.746019 5117 pod_container_deletor.go:80] "Container not found in pod's containers" containerID="8e020a0380fe10acc529d2a29c6f2cad80281fddcd2e9dd61f4140e6c916b324" Jan 23 09:36:06 crc kubenswrapper[5117]: I0123 09:36:06.136665 5117 kubelet.go:2553] "SyncLoop DELETE" source="api" pods=["openshift-infra/auto-csr-approver-29486010-hnvf5"] Jan 23 09:36:06 crc kubenswrapper[5117]: I0123 09:36:06.142741 5117 kubelet.go:2547] "SyncLoop REMOVE" source="api" pods=["openshift-infra/auto-csr-approver-29486010-hnvf5"] Jan 23 09:36:06 crc kubenswrapper[5117]: I0123 09:36:06.819713 5117 kubelet_volumes.go:163] "Cleaned up orphaned pod volumes dir" podUID="099837f0-dc7e-4b26-9605-a03e93899b1b" path="/var/lib/kubelet/pods/099837f0-dc7e-4b26-9605-a03e93899b1b/volumes" Jan 23 09:36:07 crc kubenswrapper[5117]: I0123 09:36:07.770642 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:36:07 crc kubenswrapper[5117]: E0123 09:36:07.770985 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:36:18 crc kubenswrapper[5117]: I0123 09:36:18.777148 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:36:18 crc kubenswrapper[5117]: E0123 09:36:18.777801 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:36:31 crc kubenswrapper[5117]: I0123 09:36:31.770889 5117 scope.go:117] "RemoveContainer" containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:36:31 crc kubenswrapper[5117]: E0123 09:36:31.771808 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:36:38 crc kubenswrapper[5117]: I0123 09:36:38.555781 5117 scope.go:117] "RemoveContainer" containerID="f0257da8e2fd91def2c79d223d616f413a6bae47bbff12d12e71d8a30dc80d97" Jan 23 09:36:42 crc kubenswrapper[5117]: I0123 09:36:42.771178 5117 scope.go:117] "RemoveContainer" 
containerID="44f594d2028243b9932266f42d56125362a61f9193eeaef0cb957e3f440c0b62" Jan 23 09:36:42 crc kubenswrapper[5117]: E0123 09:36:42.771737 5117 pod_workers.go:1301] "Error syncing pod, skipping" err="failed to \"StartContainer\" for \"machine-config-daemon\" with CrashLoopBackOff: \"back-off 5m0s restarting failed container=machine-config-daemon pod=machine-config-daemon-qfh6g_openshift-machine-config-operator(2d41b436-a78c-412b-b56c-54b8d73381e6)\"" pod="openshift-machine-config-operator/machine-config-daemon-qfh6g" podUID="2d41b436-a78c-412b-b56c-54b8d73381e6" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.212947 5117 kubelet.go:2537] "SyncLoop ADD" source="api" pods=["openshift-marketplace/redhat-operators-hnlr2"] Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.214015 5117 cpu_manager.go:401] "RemoveStaleState: containerMap: removing container" podUID="1645b5f8-1fe4-4c1a-9e89-25920b3f371c" containerName="oc" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.214032 5117 state_mem.go:107] "Deleted CPUSet assignment" podUID="1645b5f8-1fe4-4c1a-9e89-25920b3f371c" containerName="oc" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.214244 5117 memory_manager.go:356] "RemoveStaleState removing state" podUID="1645b5f8-1fe4-4c1a-9e89-25920b3f371c" containerName="oc" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.222477 5117 util.go:30] "No sandbox for pod can be found. Need to start a new one" pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.223862 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnlr2"] Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.368627 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"kube-api-access-kckwb\" (UniqueName: \"kubernetes.io/projected/8bde9227-d150-492d-bb97-68a186b6cf98-kube-api-access-kckwb\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.368734 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bde9227-d150-492d-bb97-68a186b6cf98-catalog-content\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.368767 5117 reconciler_common.go:251] "operationExecutor.VerifyControllerAttachedVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bde9227-d150-492d-bb97-68a186b6cf98-utilities\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.469748 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bde9227-d150-492d-bb97-68a186b6cf98-catalog-content\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.469843 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bde9227-d150-492d-bb97-68a186b6cf98-utilities\") pod 
\"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.469914 5117 reconciler_common.go:224] "operationExecutor.MountVolume started for volume \"kube-api-access-kckwb\" (UniqueName: \"kubernetes.io/projected/8bde9227-d150-492d-bb97-68a186b6cf98-kube-api-access-kckwb\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.470407 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"catalog-content\" (UniqueName: \"kubernetes.io/empty-dir/8bde9227-d150-492d-bb97-68a186b6cf98-catalog-content\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.470526 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"utilities\" (UniqueName: \"kubernetes.io/empty-dir/8bde9227-d150-492d-bb97-68a186b6cf98-utilities\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.491799 5117 operation_generator.go:615] "MountVolume.SetUp succeeded for volume \"kube-api-access-kckwb\" (UniqueName: \"kubernetes.io/projected/8bde9227-d150-492d-bb97-68a186b6cf98-kube-api-access-kckwb\") pod \"redhat-operators-hnlr2\" (UID: \"8bde9227-d150-492d-bb97-68a186b6cf98\") " pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.543525 5117 util.go:30] "No sandbox for pod can be found. 
Need to start a new one" pod="openshift-marketplace/redhat-operators-hnlr2" Jan 23 09:36:49 crc kubenswrapper[5117]: I0123 09:36:49.951454 5117 kubelet.go:2544] "SyncLoop UPDATE" source="api" pods=["openshift-marketplace/redhat-operators-hnlr2"] Jan 23 09:36:50 crc kubenswrapper[5117]: I0123 09:36:50.182815 5117 generic.go:358] "Generic (PLEG): container finished" podID="8bde9227-d150-492d-bb97-68a186b6cf98" containerID="bf1edc917d93e0108305fe20caee92d66c44dc77b4f0aed1b17293cde1afbc3b" exitCode=0 Jan 23 09:36:50 crc kubenswrapper[5117]: I0123 09:36:50.182876 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnlr2" event={"ID":"8bde9227-d150-492d-bb97-68a186b6cf98","Type":"ContainerDied","Data":"bf1edc917d93e0108305fe20caee92d66c44dc77b4f0aed1b17293cde1afbc3b"} Jan 23 09:36:50 crc kubenswrapper[5117]: I0123 09:36:50.183155 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnlr2" event={"ID":"8bde9227-d150-492d-bb97-68a186b6cf98","Type":"ContainerStarted","Data":"5dbf24dc62872bdeb8647a806d181087c255f41202d044c31918053b4bf3ae2e"} Jan 23 09:36:51 crc kubenswrapper[5117]: I0123 09:36:51.194247 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnlr2" event={"ID":"8bde9227-d150-492d-bb97-68a186b6cf98","Type":"ContainerStarted","Data":"390c34c0b2ecad10db2884478fffaec2f16e976865dfab6ac04f28631cd7a697"} Jan 23 09:36:52 crc kubenswrapper[5117]: I0123 09:36:52.205858 5117 generic.go:358] "Generic (PLEG): container finished" podID="8bde9227-d150-492d-bb97-68a186b6cf98" containerID="390c34c0b2ecad10db2884478fffaec2f16e976865dfab6ac04f28631cd7a697" exitCode=0 Jan 23 09:36:52 crc kubenswrapper[5117]: I0123 09:36:52.206256 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnlr2" event={"ID":"8bde9227-d150-492d-bb97-68a186b6cf98","Type":"ContainerDied","Data":"390c34c0b2ecad10db2884478fffaec2f16e976865dfab6ac04f28631cd7a697"} Jan 23 09:36:53 crc kubenswrapper[5117]: I0123 09:36:53.215974 5117 kubelet.go:2569] "SyncLoop (PLEG): event for pod" pod="openshift-marketplace/redhat-operators-hnlr2" event={"ID":"8bde9227-d150-492d-bb97-68a186b6cf98","Type":"ContainerStarted","Data":"1f4e8e2c05e051fc8182bea0bd4a295b857c8bab70599fd70945fd261fba3a38"}